972 results for data link


Relevance: 30.00%

Abstract:

As our world becomes increasingly interconnected, diseases can spread at an ever faster rate. Recent years have seen large-scale influenza, cholera and Ebola outbreaks, and failing to react to an outbreak in a timely manner leads to a wider spread and longer persistence. Furthermore, diseases like malaria, polio and dengue fever have been eliminated in some parts of the world but continue to place a substantial burden on countries in which they are still endemic. To reduce the disease burden and eventually move towards countrywide elimination of diseases such as malaria, understanding human mobility is crucial, both for planning interventions and for estimating the prevalence of the disease. In this talk, I will discuss how various data sources can be used to estimate human movements, population distributions and disease prevalence, as well as the relevance of this information for intervention planning. In particular, anonymised mobile phone data has been shown to be a valuable source of information for countries with unreliable population density and migration data, and I will present several studies in which mobile phone data has been used to derive these measures.
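
As a toy illustration of how mobility measures can be derived from such data (a generic sketch, not the method of any study mentioned in the talk), the following aggregates anonymised call-detail-style records into origin-destination counts of moves between regions; the record format and sample values are invented.

```python
# Sketch: aggregate anonymised call-detail-style records into
# origin-destination move counts. Records and regions are invented.
from collections import Counter, defaultdict

cdr = [  # (user, day, region) -- invented sample records
    ("u1", 1, "north"), ("u1", 2, "south"), ("u1", 3, "south"),
    ("u2", 1, "south"), ("u2", 2, "north"),
    ("u3", 1, "north"), ("u3", 2, "north"),
]

daily_region = defaultdict(list)
for user, day, region in sorted(cdr):     # order by user, then day
    daily_region[user].append(region)

flows = Counter()
for regions in daily_region.values():     # consecutive-day transitions
    for a, b in zip(regions, regions[1:]):
        if a != b:
            flows[(a, b)] += 1

print(dict(flows))
```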

Relevance: 30.00%

Abstract:

This talk will present an overview of the ongoing ERCIM project SMARTDOCS (SeMAntically-cReaTed DOCuments) which aims at automatically generating webpages from RDF data. It will particularly focus on the current issues and the investigated solutions in the different modules of the project, which are related to document planning, natural language generation and multimedia perspectives. The second part of the talk will be dedicated to the KODA annotation system, which is a knowledge-base-agnostic annotator designed to provide the RDF annotations required in the document generation process.
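
As a rough, hypothetical illustration of the generation pipeline (not the SMARTDOCS implementation), the sketch below queries a small RDF graph with rdflib and realises the result through a hand-written sentence template; the ex: namespace, the triples and the template are invented for the example.

```python
# Sketch: template-based sentence generation from RDF with rdflib.
# The ex: namespace, triples and template are invented examples.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF

EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.alice, FOAF.name, Literal("Alice")))
g.add((EX.alice, EX.affiliation, Literal("ERCIM")))

# Document planning reduced to its simplest form: one query, one template.
query = """
    SELECT ?name ?aff WHERE {
        ?p <http://xmlns.com/foaf/0.1/name> ?name .
        ?p <http://example.org/affiliation> ?aff .
    }
"""
for row in g.query(query):
    print(f"{row.name} is affiliated with {row.aff}.")
```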

Relevance: 30.00%

Abstract:

Community capacity is used to monitor socio-economic development. It is composed of a number of dimensions, which can be measured to understand possible issues in the implementation of a policy or the outcome of a project targeting a community. Measuring community capacity dimensions is usually expensive and time-consuming, requiring locally organised surveys. We therefore investigate a technique to estimate them by applying the Random Forests algorithm to secondary open government data. This research focuses on predicting measures for two dimensions: sense of community and participation. The variables included in the datasets used to train the predictive models met two criteria: nationwide availability, and a sufficiently fine-grained geographic breakdown, i.e. neighbourhood level. The models explained 77% of the variation in the sense of community measures and 63% in participation, and the most important variables for these predictions were identified. Owing to the low geographic detail of the available outcome measures, further research is required to apply the predictive models at neighbourhood level. The variables found to be most determinant for prediction were only partially in agreement with the factors that, according to the social science literature consulted, are the most influential for sense of community and participation. This finding should be investigated further from a social science perspective, in order to be understood in depth.
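
A minimal sketch of the modelling step described above, assuming a tabular dataset of neighbourhood-level indicators; the file name and column names are hypothetical placeholders, not the paper's actual variables.

```python
# Sketch: Random Forest regression of a community-capacity measure on
# open-data indicators, with variable importances. File and columns
# are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("neighbourhood_indicators.csv")
X = df.drop(columns=["sense_of_community"])   # open-data predictors
y = df["sense_of_community"]                  # survey-based outcome

model = RandomForestRegressor(n_estimators=500, random_state=0)
print("R^2 (5-fold CV):", cross_val_score(model, X, y, cv=5).mean())

model.fit(X, y)
for name, imp in sorted(zip(X.columns, model.feature_importances_),
                        key=lambda t: -t[1])[:10]:
    print(f"{name}: {imp:.3f}")   # top predictors by importance
```

Cross-validated R^2 corresponds to the "variation explained" figures quoted in the abstract; the importance ranking is the usual way the most determinant variables would be extracted from such a model.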

Relevance: 30.00%

Abstract:

Abstract: Big data is nowadays a fashionable topic, independently of what people mean when they use the term. But being big is just a matter of volume, although there is no clear agreement on the size threshold. On the other hand, it is easy to capture large amounts of data using a brute-force approach. So the real goal should not be big data but to ask ourselves, for a given problem, what the right data is and how much of it is needed. For some problems this would imply big data, but for the majority of problems much less data is needed. In this talk we explore the trade-offs involved and the main problems that come with big data, using the Web as a case study: scalability, redundancy, bias, noise, spam, and privacy.

Speaker Biography: Ricardo Baeza-Yates has been VP of Research for Yahoo Labs since 2006, leading teams in the United States, Europe and Latin America, and has been based in Sunnyvale, California, since August 2014. During this time he has led the labs in Barcelona and Santiago de Chile, and between 2008 and 2012 he also oversaw the Haifa lab. He is also a part-time Professor at the Dept. of Information and Communication Technologies of the Universitat Pompeu Fabra in Barcelona, Spain. During 2005 he was an ICREA research professor at the same university. Until 2004 he was a Professor at the Dept. of Computing Science of the University of Chile (on leave of absence to this day), where he founded and directed the Center for Web Research. He obtained a Ph.D. in CS from the University of Waterloo, Canada, in 1989; before that, he obtained two master's degrees (M.Sc. CS and M.Eng. EE) and an electronics engineering degree from the University of Chile in Santiago. He is co-author of the best-selling textbook Modern Information Retrieval, published in 1999 by Addison-Wesley, whose second, enlarged edition (2011) won the ASIST 2012 Book of the Year award. He is also co-author of the second edition of the Handbook of Algorithms and Data Structures (Addison-Wesley, 1991) and co-editor of Information Retrieval: Algorithms and Data Structures (Prentice-Hall, 1992), among more than 500 other publications. From 2002 to 2004 he was elected to the board of governors of the IEEE Computer Society, and in 2012 he was elected to the ACM Council. He has received the Organization of American States award for young researchers in exact sciences (1993), the Graham Medal for innovation in computing, given by the University of Waterloo to distinguished alumni (2007), the CLEI Latin American distinction for contributions to CS in the region (2009), and the National Award of the Chilean Association of Engineers (2010), among other distinctions. In 2003 he became the first computer scientist to be elected to the Chilean Academy of Sciences, and since 2010 he has been a founding member of the Chilean Academy of Engineering. He was named an ACM Fellow in 2009 and an IEEE Fellow in 2011.

Relevance: 30.00%

Abstract:

Title: Data-Driven Text Generation using Neural Networks
Speaker: Pavlos Vougiouklis, University of Southampton
Abstract: Recent work on neural networks shows their great potential at tackling a wide variety of Natural Language Processing (NLP) tasks. This talk will focus on the Natural Language Generation (NLG) problem and, more specifically, on the extent to which neural network language models can be employed for context-sensitive and data-driven text generation. In addition, a neural network architecture for response generation in social media will be discussed, along with the training methods that enable it to capture contextual information and participate effectively in public conversations.
Speaker Bio: Pavlos Vougiouklis obtained his 5-year Diploma in Electrical and Computer Engineering from the Aristotle University of Thessaloniki in 2013. He was awarded an MSc degree in Software Engineering from the University of Southampton in 2014. In 2015 he joined the Web and Internet Science (WAIS) research group of the University of Southampton, where he is currently working towards a PhD in the field of neural network approaches for Natural Language Processing.

Title: Provenance is Complicated and Boring — Is there a solution?
Speaker: Darren Richardson, University of Southampton
Abstract: Paper trails, auditing, and accountability — arguably not the sexiest terms in computer science. But then you discover that you've possibly been eating horse meat, and the importance of provenance becomes almost palpable. Having accepted that we should be creating provenance-enabled systems, the challenge of communicating that provenance to casual users is not trivial: users should not need a detailed working knowledge of your system, and they certainly shouldn't be expected to understand the data model. So how do you give users an insight into the provenance without having to build a bespoke system for each and every provenance installation?
Speaker Bio: Darren is a final-year Computer Science PhD student. He completed his undergraduate degree in Electronic Engineering at Southampton in 2012.
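
As a toy companion to the first talk, the sketch below trains a tiny character-level LSTM language model and samples from it; the corpus, model size and training budget are placeholder choices, far smaller than anything a real response-generation system would use.

```python
# Sketch: a tiny character-level LSTM language model, the simplest
# relative of the data-driven generation models discussed in the talk.
# Corpus, sizes and training budget are toy placeholders.
import torch
import torch.nn as nn

text = "hello world hello web hello data "
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}

class CharLM(nn.Module):
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.out(h), state

model = CharLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
data = torch.tensor([stoi[c] for c in text]).unsqueeze(0)

for step in range(200):                   # next-character prediction
    logits, _ = model(data[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, len(chars)), data[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample a short continuation, one character at a time.
idx, state = data[:, :1], None
out = []
for _ in range(40):
    logits, state = model(idx, state)
    idx = torch.multinomial(logits[:, -1].softmax(-1), 1)
    out.append(chars[idx.item()])
print("".join(out))
```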

Relevance: 30.00%

Abstract:

An emerging consensus in cognitive science views the biological brain as a hierarchically-organized predictive processing system. This is a system in which higher-order regions are continuously attempting to predict the activity of lower-order regions at a variety of (increasingly abstract) spatial and temporal scales. The brain is thus revealed as a hierarchical prediction machine that is constantly engaged in the effort to predict the flow of information originating from the sensory surfaces. Such a view seems to afford a great deal of explanatory leverage when it comes to a broad swathe of seemingly disparate psychological phenomena (e.g., learning, memory, perception, action, emotion, planning, reason, imagination, and conscious experience). In the most positive case, the predictive processing story seems to provide our first glimpse at what a unified (computationally tractable and neurobiologically plausible) account of human psychology might look like. This obviously marks out one reason why such models should be the focus of current empirical and theoretical attention. Another reason, however, is rooted in the potential of such models to advance the current state of the art in machine intelligence and machine learning. Interestingly, the vision of the brain as a hierarchical prediction machine establishes contact with work that goes under the heading of 'deep learning'. Deep learning systems often attempt to make use of predictive processing schemes and (increasingly abstract) generative models as a means of supporting the analysis of large data sets. But are such computational systems sufficient (by themselves) to provide a route to general human-level analytic capabilities? I will argue that they are not, and that closer attention to a broader range of forces and factors (many of which are not confined to the neural realm) may be required to understand what it is that gives human cognition its distinctive (and largely unique) flavour. The vision that emerges is one of 'homomimetic deep learning systems': systems that situate a hierarchically-organized predictive processing core within a larger nexus of developmental, behavioural, symbolic, technological and social influences. Relative to that vision, I suggest that we should see the Web as a form of 'cognitive ecology', one that is as much involved with the transformation of machine intelligence as it is with the progressive reshaping of our own cognitive capabilities.
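
As a bare-bones illustration of the prediction-error minimisation at the heart of such accounts (a single linear predictive-coding layer, not a model endorsed in the talk), the sketch below infers a latent representation by iteratively reducing the error between a top-down prediction and the input; the generative weights and input are random placeholders.

```python
# Sketch: one layer of linear predictive coding. A latent estimate r
# generates a top-down prediction W @ r of the input x; gradient steps
# on r reduce the prediction error. W and x are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 4))        # generative weights: latent -> input
x = rng.normal(size=16)             # stand-in for sensory input

r = np.zeros(4)                     # latent representation
for _ in range(500):
    err = x - W @ r                 # bottom-up prediction error
    r += 0.02 * (W.T @ err)         # update latent to explain the input

print("residual error:", np.linalg.norm(x - W @ r))
```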

Relevance: 30.00%

Abstract:

Recent interest in the validation of general circulation models (GCMs) has focused on objective methods. A small number of authors have used the direct synoptic identification of phenomena, together with a statistical analysis, to perform an objective comparison between various datasets. This paper describes a general method for performing the synoptic identification of phenomena that can be used for an objective analysis of atmospheric, or oceanographic, datasets obtained from numerical models and remote sensing. Methods usually associated with image processing are used to segment the scene and to identify suitable feature points to represent the phenomena of interest; this is performed for each time level. A technique from dynamic scene analysis is then used to link the feature points to form trajectories. The method is fully automatic and should be applicable to a wide range of geophysical fields. An example is shown of results obtained by applying this method to data from a run of the Universities Global Atmospheric Modelling Project GCM.
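
A stripped-down sketch of the trajectory-linking stage, assuming feature points have already been extracted at each time level; greedy nearest-neighbour matching within a fixed radius stands in for the dynamic scene analysis technique, and the points are random stand-ins for detected features.

```python
# Sketch: link feature points found at successive time levels into
# trajectories by greedy nearest-neighbour matching within a radius.
import numpy as np

rng = np.random.default_rng(1)
frames = [rng.uniform(0, 100, size=(5, 2)) for _ in range(4)]

max_dist = 30.0                       # search radius for a match
tracks = [[p] for p in frames[0]]     # start one track per initial point
for pts in frames[1:]:
    unused = list(range(len(pts)))
    for tr in tracks:
        if not unused:
            break
        dists = [np.linalg.norm(pts[j] - tr[-1]) for j in unused]
        k = int(np.argmin(dists))
        if dists[k] <= max_dist:      # extend track with nearest point
            tr.append(pts[unused.pop(k)])

for i, tr in enumerate(tracks):
    print(f"trajectory {i}: {len(tr)} points")
```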

Relevance: 30.00%

Abstract:

A connection is shown to exist between the mesoscale eddy activity around Madagascar and the large-scale interannual variability in the Indian Ocean. We use the combined TOPEX/Poseidon-ERS sea surface height (SSH) data for the period 1993–2003. The SSH fields in the Mozambique Channel and east of Madagascar exhibit a significant interannual oscillation, related to the arrival of large-scale anomalies that propagate westward along 10°–15°S in response to Indian Ocean dipole (IOD) events. Positive (negative) SSH anomalies associated with a positive (negative) IOD phase induce a shift in the intensity and position of the tropical and subtropical gyres, resulting in a weakening (strengthening) of the South Equatorial Current and its branches along eastern Madagascar. In addition, the flow through the narrows of the Mozambique Channel around 17°S increases (decreases) during periods of a stronger and more northward (southward) extension of the subtropical (tropical) gyre. Interaction between the currents in the narrows and southward-propagating eddies from the northern Channel leads to interannual variability in the eddy kinetic energy of the central Channel, in phase with that of the SSH field.

Relevance: 30.00%

Abstract:

Hydrologic transport of dissolved organic carbon (DOC) from peat soils may differ from that in organo-mineral soils in how it responds to changes in flow, because of differences in soil profile and hydrology. In well-drained organo-mineral soils, low flow passes through the lower mineral layer, where DOC is adsorbed, and high flow passes through the upper organic layer, where DOC is produced; DOC concentrations in streams draining organo-mineral soils therefore typically increase with flow. In saturated peat soils, both high and low flows pass through an organic layer where DOC is produced, so DOC in stream water draining peat may not increase in response to changes in flow, as there is no switch in flow path between a mineral and an organic layer. To verify this, we conducted a high-resolution monitoring study of soil and stream water in an upland peat catchment in northern England. Our data showed a strong positive correlation between DOC concentrations at −1 and −5 cm depth and those in stream water, and weaker correlations between concentrations at −20 to −50 cm depth and stream water. Although near-surface organic material appears to be the key source of stream-water DOC in both peat and organo-mineral soils, we observed a negative correlation between stream flow and DOC concentrations, rather than a positive one, because DOC released from organic layers during low and high flow was diluted by rainfall. These differences in DOC transport processes between peat and organo-mineral soils have different implications for our understanding of long-term changes in DOC exports. While increased rainfall may increase the DOC flux from peat, owing to the increase in water volume, it may decrease concentrations. This response is contrary to the expected changes in DOC exports from organo-mineral soils, where increased rainfall is likely to increase both flux and concentration.
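
A minimal sketch of the kind of correlation analysis reported above, assuming paired series of stream-water DOC, soil-water DOC by depth, and stream flow; all the arrays are synthetic placeholders, not the study's data.

```python
# Sketch: Pearson correlations of stream-water DOC with soil-water DOC
# at several depths, and with stream flow. All series are synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 200
stream_doc = rng.normal(20, 3, n)          # placeholder stream series
flow = rng.normal(5, 1, n)                 # placeholder flow series
soil_doc = {d: stream_doc + rng.normal(0, noise, n)   # noisier at depth
            for d, noise in [(-1, 1.0), (-5, 1.5), (-20, 6.0), (-50, 8.0)]}

for depth, series in soil_doc.items():
    r, _ = pearsonr(series, stream_doc)
    print(f"DOC at {depth} cm vs stream DOC: r = {r:.2f}")
print(f"flow vs stream DOC: r = {pearsonr(flow, stream_doc)[0]:.2f}")
```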

Relevance: 30.00%

Abstract:

This paper shows that a wavelet network and a linear term can be advantageously combined for the purpose of nonlinear system identification. The theoretical foundation of this approach is laid by proving that radial wavelets are orthogonal to linear functions. A constructive procedure for building such nonlinear regression structures, termed linear-wavelet models, is described. For illustration, simulation data are used to identify a model for a two-link robotic manipulator. The results show that the introduction of wavelets does improve the prediction ability of a linear model.
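
A toy, one-dimensional sketch in the spirit of the linear-wavelet model described above: a linear term plus radial (Mexican-hat) wavelet units with fixed centres and scale, fitted jointly by least squares. The target function and basis choices are illustrative, not the paper's constructive procedure.

```python
# Sketch: fit y = a*x + b + sum_i c_i * psi((x - t_i)/s) by linear least
# squares, with psi the Mexican-hat (Ricker) radial wavelet.
import numpy as np

def ricker(u):                          # Mexican-hat mother wavelet
    return (1.0 - u**2) * np.exp(-u**2 / 2.0)

rng = np.random.default_rng(3)
x = np.linspace(-3, 3, 300)
y = np.sin(2 * x) + 0.5 * x + 0.05 * rng.normal(size=x.size)  # toy target

centres = np.linspace(-3, 3, 12)        # fixed translation grid
scale = 0.5                             # fixed dilation
Phi = np.column_stack([x, np.ones_like(x)] +
                      [ricker((x - t) / scale) for t in centres])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("RMS residual:", np.sqrt(np.mean((Phi @ coef - y) ** 2)))
```

Because both the linear term and the wavelet units enter linearly in the parameters, the whole structure can be estimated in one least-squares solve, which is what makes the combination attractive.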

Relevance: 30.00%

Abstract:

In basic network transactions, a datagram travelling from source to destination is routed through numerous routers and paths, depending on which paths are free and uncongested; the resulting transmission route can be too long, incurring greater delay, jitter and congestion and reduced throughput. One of the major problems of packet-switched networks is cell delay variation, or jitter, which arises from queuing delay and depends on the applied loading conditions. The accumulation of delay and jitter along the nodes of a transmission route, together with dropped packets, adds further complexity for multimedia traffic, because there is no guarantee that each traffic stream will be delivered according to its own jitter constraints; the effects of jitter therefore need to be analysed. IP routers provide a single path for the transmission of all packets. Multi-Protocol Label Switching (MPLS), on the other hand, separates packet forwarding from routing, enabling packets to use appropriate routes and allowing the behaviour of transmission paths to be optimized and controlled, thus correcting some of the shortfalls of IP routing. MPLS has therefore been used in the analysis for effective transmission through the various networks. This paper analyses the effects of delay, congestion, interference, jitter and packet loss in the transmission of signals from source to destination. The impact of link failures and repair paths in various physical topologies, namely bus, star, mesh and hybrid, is also analysed under standard network conditions.
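
For concreteness, the sketch below implements the standard running interarrival-jitter estimator used for RTP media streams (RFC 3550); the packet timestamps are invented sample values.

```python
# Sketch: RFC 3550 running interarrival-jitter estimator.
# Send/receive timestamps (in ms) are invented sample values.
def interarrival_jitter(send_ts, recv_ts):
    jitter = 0.0
    for i in range(1, len(send_ts)):
        # D = difference in transit times of consecutive packets
        d = (recv_ts[i] - recv_ts[i - 1]) - (send_ts[i] - send_ts[i - 1])
        jitter += (abs(d) - jitter) / 16.0   # exponential smoothing
    return jitter

send = [0, 20, 40, 60, 80, 100]
recv = [5, 27, 44, 69, 84, 108]
print(f"estimated jitter: {interarrival_jitter(send, recv):.2f} ms")
```

The 1/16 gain is the smoothing constant specified in RFC 3550; it keeps the estimate stable while still tracking changes in queuing delay.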

Relevance: 30.00%

Abstract:

This paper proposes a practical approach to enhancing Quality of Service (QoS) routing by providing alternative or repair paths in the event of a breakage of a working path. The proposed scheme guarantees that every Protected Node (PN) is connected to a multi-repair path, such that no further failure or breakage of single or double repair paths can cause any simultaneous loss of connectivity between an ingress node and an egress node. Links to be protected in an MPLS network are predefined, and a Label Switched Path (LSP) request involves the establishment of a working path. The use of multi-protection paths permits the formation of numerous protection paths, allowing greater flexibility. Our analysis examined several methods, including single, double and multi-repair routes, and the prioritization of signals along the protected paths, with the aim of improving QoS and throughput while reducing the cost of protection-path placement, delay, congestion and collision. The results indicated that creating multi-repair paths and prioritizing packets reduces delay and increases throughput: the delays at the ingress/egress LSPs were low compared with the case where the signals had not been classified. The proposed scheme therefore provides a means of improving QoS in MPLS path restoration using available network resources. Prioritizing packets in the data plane revealed that the amount of traffic transmitted over medium- and low-priority Label Switched Paths (LSPs) has no impact on the explicit rate of the high-priority LSP, eliminating the knock-on effect.
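
A minimal sketch of the repair-path idea under the assumption of full topology knowledge: compute a working shortest path, then a repair path that avoids a failed link. The five-node topology is invented, and the sketch ignores MPLS signalling entirely.

```python
# Sketch: working path plus a repair path that avoids a failed link.
# The five-node weighted topology is an invented example.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 1), ("B", "C", 1), ("A", "D", 2),
                           ("D", "E", 2), ("E", "C", 1), ("B", "E", 3)])

working = nx.shortest_path(G, "A", "C", weight="weight")
print("working path:", working)

failed = ("B", "C")                  # a protected link breaks
H = G.copy()
H.remove_edge(*failed)               # recompute on the surviving graph
repair = nx.shortest_path(H, "A", "C", weight="weight")
print("repair path: ", repair)
```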

Relevance: 30.00%

Abstract:

In the last decade, a vast number of land surface schemes have been designed for use in global climate models, atmospheric weather prediction, mesoscale numerical models, ecological models, and models of global change. Since land surface schemes are designed for different purposes, they treat bare soil processes, vegetation, and soil water movement at various levels of complexity. This paper contributes to the small group of papers dealing with the intercomparison of differently designed and oriented land surface schemes. For that purpose we have chosen three schemes, classified according to Shao et al. (1995): i) for global climate models, BATS (Dickinson et al., 1986; Dickinson et al., 1992); ii) for mesoscale and ecological models, LEAF (Lee, 1992); and iii) for mesoscale models, LAPS (Mihailović, 1996; Mihailović and Kallos, 1997; Mihailović et al., 1999). These schemes were compared using surface flux and leaf temperature outputs obtained by time integrations of data sets derived from micrometeorological measurements above a maize field at an experimental site in De Sinderhoeve (The Netherlands) for 18 August, 8 September, and 4 October 1988. Finally, the comparison of the schemes was supported by a simple statistical analysis of the surface flux outputs.
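
A small sketch of the sort of simple statistics such an intercomparison might apply to flux outputs (bias, RMSE and correlation against observations); the observed and modelled series here are synthetic placeholders, not the De Sinderhoeve data.

```python
# Sketch: bias, RMSE and correlation of modelled vs observed surface
# fluxes for three schemes. All series are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(4)
obs = 300 * np.sin(np.linspace(0, np.pi, 48))        # toy daytime flux
schemes = {name: obs + rng.normal(b, s, obs.size)    # toy model output
           for name, (b, s) in
           [("BATS", (10, 25)), ("LEAF", (-5, 30)), ("LAPS", (2, 20))]}

for name, mod in schemes.items():
    bias = np.mean(mod - obs)
    rmse = np.sqrt(np.mean((mod - obs) ** 2))
    r = np.corrcoef(mod, obs)[0, 1]
    print(f"{name}: bias={bias:+6.1f}  rmse={rmse:6.1f}  r={r:.2f}")
```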

Relevance: 30.00%

Abstract:

An analysis of observational data in the Barents Sea along a meridian at 33°30′E between 70°30′ and 72°30′N has revealed a negative correlation between El Niño/La Niña Southern Oscillation (ENSO) events and water temperature in the top 200 m: the temperature drops by about 0.5 °C during warm ENSO events, while during cold ENSO events the top 200 m layer of the Barents Sea is warmer. Results from 1- and 1/4-degree global NEMO models show a similar response for the whole Barents Sea. During the strong warm ENSO event in 1997–1998, an anomalous anticyclonic atmospheric circulation over the Barents Sea enhances heat losses and substantially influences the Barents Sea inflow from the North Atlantic via changes in ocean currents. Under normal conditions a warm current enters the Barents Sea from the North Atlantic along the Scandinavian peninsula; after the 1997–1998 event, however, this current is weakened. During 1997–1998 the model annual mean temperature in the Barents Sea decreases by about 0.8 °C, also resulting in a higher sea ice volume. In contrast, during the cold ENSO events in 1999–2000 and 2007–2008, the model shows a lower sea ice volume and annual mean temperatures in the upper layer of the Barents Sea that are higher by about 0.7 °C. An analysis of model data shows that the strength of the Atlantic inflow into the Barents Sea is the main cause of heat content variability, and that it is forced by changing pressure and winds in the North Atlantic. Surface heat exchange with the atmosphere, however, provides the means by which the Barents Sea heat budget relaxes to normal in the year following an ENSO event.