792 results for small-world network


Relevance:

30.00%

Publisher:

Abstract:

We present a new model for the Sun's global photospheric magnetic field during a deep minimum of activity, in which no active regions emerge. The emergence and subsequent evolution of small-scale magnetic features across the full solar surface is simulated, subject to the influence of a global supergranular flow pattern. Visually, the resulting simulated magnetograms reproduce the typical structure and scale observed in quiet Sun magnetograms. Quantitatively, the simulation quickly reaches a steady state, resulting in a mean field and flux distribution that are in good agreement with those determined from observations. A potential coronal magnetic field is extrapolated from the simulated full Sun magnetograms, to consider the implications of such a quiet photospheric magnetic field on the corona and inner heliosphere. The bulk of the coronal magnetic field closes very low down, in short connections between small-scale features in the simulated magnetic network. Just 0.1% of the photospheric magnetic flux is found to be open at 2.5 R☉, around 10-100 times less than that determined for typical HMI synoptic map observations. If such conditions were to exist on the Sun, this would lead to a significantly weaker interplanetary magnetic field than is presently observed, and hence a much higher cosmic ray flux at Earth.
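The flux budget quoted above (0.1% of the flux open at 2.5 R☉) can be illustrated with a toy calculation; the sketch below uses synthetic stand-in data and assumed array names, not the authors' simulation output.

```python
import numpy as np

# Hypothetical inputs: a simulated full-Sun magnetogram of the radial field
# (Gauss) on an equal-area pixel grid, plus a boolean mask marking pixels whose
# extrapolated field lines remain open at 2.5 solar radii.
R_SUN_CM = 6.957e10                       # solar radius in cm
n_pix = 360 * 180
br = np.random.normal(0.0, 5.0, n_pix)    # stand-in quiet-Sun field values
open_mask = np.random.rand(n_pix) < 1e-3  # stand-in open-field footpoints

pixel_area = 4.0 * np.pi * R_SUN_CM**2 / n_pix          # cm^2 per equal-area pixel
total_unsigned_flux = np.sum(np.abs(br)) * pixel_area   # Mx
open_flux = np.sum(np.abs(br[open_mask])) * pixel_area  # Mx

print(f"mean |B| = {np.mean(np.abs(br)):.2f} G")
print(f"open flux fraction = {open_flux / total_unsigned_flux:.2e}")
```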

Relevance:

30.00%

Publisher:

Abstract:

In today's big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has been traditionally used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
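To make the contrast with vertex-centric frameworks concrete, here is a minimal neighbourhood-centric sketch in Python using networkx; the function names and the ego-network task are illustrative assumptions, not NSCALE's actual API.

```python
import networkx as nx

# Hedged sketch of a neighbourhood-centric task of the kind NSCALE targets:
# the user program receives a whole k-hop subgraph per vertex (here an ego
# network), rather than only the state of a single vertex.
def local_clustering(ego: nx.Graph, center) -> float:
    """Example per-subgraph analysis: clustering coefficient of the center."""
    return nx.clustering(ego, center)

def run_neighborhood_tasks(g: nx.Graph, k: int = 1):
    results = {}
    for v in g.nodes:
        # Extract the k-hop neighbourhood once, then hand it to the user code.
        ego = nx.ego_graph(g, v, radius=k)
        results[v] = local_clustering(ego, v)
    return results

if __name__ == "__main__":
    graph = nx.karate_club_graph()
    print(run_neighborhood_tasks(graph, k=1))
```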

Relevance:

30.00%

Publisher:

Abstract:

Abstract Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, normally a long sequence of moves is needed to construct a schedule and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not completed yet, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus learning can amount to 'counting' in the case of multinomial distributions.
In the LCS approach, each rule has its strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step is to assign each rule at each stage a constant initial strength. Then rules are selected by using the Roulette Wheel strategy. The next step is to reinforce the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step is to select fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms and may therefore be of interest to researchers and practitioners in areas of scheduling and evolutionary computation. References 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press). 2. Li, J. and Kwan, R.S.K. (2003), 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1), pp 1-18.
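The observation that, with a known structure and fully observed variables, "learning can amount to counting" can be illustrated with a minimal sketch: rule probabilities at each construction step are estimated from their frequencies in a set of promising rule strings and then sampled to generate new strings. Rule names, string length and the choice of the "promising" set below are placeholders, not the algorithm's actual parameters.

```python
import random
from collections import Counter

# Hedged sketch of multinomial "learning as counting" for rule strings.
RULES = ["rule_A", "rule_B", "rule_C"]
N_STEPS = 10  # length of a rule string (one rule choice per construction step)

def learn_probabilities(promising: list[list[str]]) -> list[dict[str, float]]:
    """Estimate, per step, the frequency of each rule among promising strings."""
    probs = []
    for step in range(N_STEPS):
        counts = Counter(s[step] for s in promising)
        total = sum(counts.values())
        probs.append({r: counts.get(r, 0) / total for r in RULES})
    return probs

def sample_string(probs: list[dict[str, float]]) -> list[str]:
    """Draw a new rule string from the per-step multinomial model."""
    return [random.choices(RULES, weights=[p[r] for r in RULES])[0] for p in probs]

# Usage: start from random strings, treat the fitter half as "promising",
# re-count, re-sample; in the real algorithm fitness selection drives this loop.
population = [[random.choice(RULES) for _ in range(N_STEPS)] for _ in range(20)]
model = learn_probabilities(population[:10])
print(sample_string(model))
```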

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents approximation algorithms for some NP-Hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely-believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: An approximation algorithm for an NP-Hard problem is a polynomial time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible. The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u,v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below. We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k) approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes. We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications. Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink. 
We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. Given a connectivity requirement R_uv for each pair of vertices u,v, the goal is to find a low-cost network which, for each uv, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results. In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + epsilon)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
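For reference, the two quantities this abstract leans on, the approximation ratio and the density used by the pruning step, can be written out explicitly; these are the standard definitions consistent with the text, not formulas quoted from the thesis.

```latex
% Approximation ratio of an algorithm A for a minimisation problem:
% the worst-case factor by which its solution cost exceeds the optimum.
\[
  \alpha(A) \;=\; \max_{\text{instances } I} \frac{\mathrm{cost}\big(A(I)\big)}{\mathrm{OPT}(I)}
\]
% Density of a subgraph H = (V_H, E_H) with edge costs c, as used by the
% pruning step: total edge cost per vertex retained.
\[
  \mathrm{density}(H) \;=\; \frac{\sum_{e \in E_H} c(e)}{|V_H|}
\]
```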

Relevance:

30.00%

Publisher:

Abstract:

Abstract The potential impacts of climate change and environmental variability are already evident in most parts of the world, which is witnessing increasing temperature rates and prolonged flood or drought conditions that affect agriculture activities and nature-dependent livelihoods. This study was conducted in Mwanga District in the Kilimanjaro region of Tanzania to assess the nature and impacts of climate change and environmental variability on agriculture-dependent livelihoods and the adaptation strategies adopted by small-scale rural farmers. To attain its objective, the study employed a mixed methods approach in which both qualitative and quantitative techniques were used. The study shows that farmers are highly aware of their local environment and are conscious of the ways environmental changes affect their livelihoods. Farmers perceived that changes in climatic variables such as rainfall and temperature had occurred in their area over the period of three decades, and associated these changes with climate change and environmental variability. Farmers’ perceptions were confirmed by the evidence from rainfall and temperature data obtained from local and national weather stations, which showed that temperature and rainfall in the study area had become more variable over the past three decades. Farmers’ knowledge and perceptions of climate change vary depending on the location, age and gender of the respondents. The findings show that the farmers have limited understanding of the causes of climatic conditions and environmental variability, as some respondents associated climate change and environmental variability with social, cultural and religious factors. This study suggests that, despite the changing climatic conditions and environmental variability, farmers have developed and implemented a number of agriculture adaptation strategies that enable them to reduce their vulnerability to the changing conditions. The findings show that agriculture adaptation strategies employ both planned and autonomous adaptation strategies. However, the study shows that increasing drought conditions, rainfall variability, declining soil fertility and use of cheap farming technology are among the challenges that limit effective implementation of agriculture adaptation strategies. This study recommends further research on the varieties of drought-resilient crops, the development of small-scale irrigation schemes to reduce dependence on rain-fed agriculture, and the improvement of crop production in a given plot of land. In respect of the development of adaptation strategies, the study recommends the involvement of the local farmers and consideration of their knowledge and experience in the farming activities as well as the conditions of their local environment. Thus, the findings of this study may be helpful at various levels of decision making with regard to the development of climate change and environmental variability policies and strategies towards reducing farmers’ vulnerability to current and expected future changes.

Relevance:

30.00%

Publisher:

Abstract:

Social networks are a recent phenomenon of communication, with a high prevalence of young users. This concept serves as a motto for a multidisciplinary project, which aims to create a simple communication network using light as the transmission medium. Mixed teams, composed of students from secondary and higher education schools, are partners in the development of an optical transceiver. A LED lamp array and a small photodiode are the optical transmitter and receiver, respectively. Using several transceivers aligned with each other, this configuration creates a ring communication network, enabling the exchange of messages between users. Through this project, some concepts addressed in physics classes in secondary schools (e.g. photoelectric phenomena and the properties of light) are experimentally verified and used to communicate, in a classroom or a laboratory.
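A minimal sketch of how a message could be carried over such a LED/photodiode link with simple on-off keying is given below; the 8-bit framing and the receiver threshold are assumptions for illustration, not the project's actual modulation scheme.

```python
# Hedged sketch: each character becomes 8 bits, a lit LED represents 1 and a
# dark LED 0, and the receiver thresholds the photodiode signal back into bits.
def encode(message: str) -> list[int]:
    bits = []
    for ch in message.encode("ascii"):
        bits.extend((ch >> i) & 1 for i in range(7, -1, -1))  # MSB first
    return bits

def decode(samples: list[float], threshold: float = 0.5) -> str:
    bits = [1 if s > threshold else 0 for s in samples]
    chars = []
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        chars.append(chr(byte))
    return "".join(chars)

tx = encode("hi")
rx_samples = [0.9 if b else 0.05 for b in tx]  # idealised photodiode readings
print(decode(rx_samples))                      # -> "hi"
```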

Relevance:

30.00%

Publisher:

Abstract:

The Internet has grown in size at rapid rates since BGP records began, and continues to do so. This has raised concerns about the scalability of the current BGP routing system, as the routing state at each router in a shortest-path routing protocol grows supra-linearly as the network grows. The concerns are that the memory capacity of routers will not be able to keep up with demand, and that the growth of the Internet will become ever more cramped as more and more of the world seeks the benefits of being connected. Compact routing schemes, where the routing state grows only sub-linearly relative to the growth of the network, could solve this problem and ensure that router memory would not be a bottleneck to Internet growth. These schemes trade away shortest-path routing for scalable memory state, by allowing some paths to have a certain amount of bounded "stretch". The most promising such scheme is Cowen Routing, which can provide scalable, compact routing state for Internet routing, while still providing shortest-path routing to nearly all other nodes, with only slightly stretched paths to a very small subset of the network. Currently, there is no fully distributed form of Cowen Routing that would be practical for the Internet. This dissertation describes a fully distributed and compact protocol for Cowen Routing, using the k-core graph decomposition. Previous compact routing work showed that the k-core graph decomposition is useful for Cowen Routing on the Internet, but no distributed form existed. This dissertation gives a distributed k-core algorithm optimised to be efficient on dynamic graphs, along with proofs of its correctness. The performance and efficiency of this distributed k-core algorithm is evaluated on large Internet AS graphs, with excellent results. This dissertation then goes on to describe a fully distributed and compact Cowen Routing protocol. The protocol comprises a landmark selection process for Cowen Routing using the k-core algorithm, with mechanisms to ensure compact state at all times, including at bootstrap; a local cluster routing process, with mechanisms for policy application and control of cluster sizes, again ensuring that state remains compact at all times; and a landmark routing process with a prioritisation mechanism for announcements that ensures compact state at all times.
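For readers unfamiliar with the k-core decomposition that underpins the landmark selection, the classic centralised peeling algorithm is sketched below (in Python, using networkx only for test data); this illustrates what the dissertation's distributed, dynamic algorithm computes, and is not the distributed protocol itself.

```python
import networkx as nx

# Each node's core number is the largest k such that the node survives in the
# subgraph where every remaining node has degree >= k. Classic peeling:
def core_numbers(g: nx.Graph) -> dict:
    g = g.copy()
    core = {}
    k = 0
    while g.number_of_nodes() > 0:
        # Repeatedly peel nodes of degree <= k; survivors belong to a higher core.
        while True:
            low = [v for v, d in g.degree() if d <= k]
            if not low:
                break
            for v in low:
                core[v] = k
                g.remove_node(v)
        k += 1
    return core

if __name__ == "__main__":
    graph = nx.barabasi_albert_graph(200, 3, seed=1)
    cores = core_numbers(graph)
    print("max core number:", max(cores.values()))
    # Landmark selection in Cowen Routing could, for example, favour nodes in
    # the highest cores; nx.core_number(graph) computes the same values.
```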

Relevance:

30.00%

Publisher:

Abstract:

We present a case for using Global Community Innovation Platforms (GCIPs), an approach to improve innovation and knowledge exchange in international scientific communities through a common and open online infrastructure. We highlight the value of GCIPs by focusing on recent efforts targeting the ecological sciences, where GCIPs are of high relevance given the urgent need for interdisciplinary, geographical, and cross-sector collaboration to cope with growing challenges to the environment as well as the scientific community itself. Amidst the emergence of new international institutions, organizations, and meetings, GCIPs provide a stable international infrastructure for rapid and long-term coordination that can be accessed by any individual. This accessibility can be especially important for researchers early in their careers. Recent examples of early-career GCIPs complement an array of existing options for early-career scientists to improve skill sets, increase academic and social impact, and broaden career opportunities. We provide a number of examples of existing early-career initiatives that incorporate elements from the GCIPs approach, and highlight an in-depth case study from the ecological sciences: the International Network of Next-Generation Ecologists (INNGE), initiated in 2010 with support from the International Association for Ecology and 20 member institutions from six continents.

Relevance:

30.00%

Publisher:

Abstract:

The influenza virus is one of the major causes of morbidity and mortality worldwide, affecting a large number of individuals each year. In Portugal, epidemiological surveillance of influenza is carried out by the National Influenza Surveillance Programme (Programa Nacional de Vigilância da Gripe, PNVG), which integrates information from the clinical and virological components and generates detailed information on influenza activity. The clinical component is supported by the Médicos-Sentinela network and plays a particularly relevant role because it allows incidence rates to be calculated, describing the intensity and evolution of the influenza epidemic. The virological component is based on laboratory diagnosis of the influenza virus and aims to detect and characterise the influenza viruses in circulation. For a more complete study of the aetiology of influenza-like illness, differential diagnosis of other respiratory viruses was performed: respiratory syncytial virus types A (RSV A) and B (RSV B), human rhinovirus (hRV), human parainfluenza virus types 1 (PIV1), 2 (PIV2) and 3 (PIV3), human coronavirus (hCoV), adenovirus (AdV) and human metapneumovirus (hMPV). Since 2009, influenza surveillance has also relied on the Portuguese Laboratory Network for Influenza Diagnosis (Rede Portuguesa de Laboratórios para o Diagnóstico da Gripe), which currently comprises 15 hospitals where laboratory diagnosis of influenza is performed. The information obtained from this laboratory network adds to the PNVG data on more severe cases of respiratory disease requiring hospitalisation. In 2011/2012, a pilot study was launched to monitor severe influenza cases admitted to Intensive Care Units (ICUs), which gave rise to the current ICU influenza surveillance network, comprising 31 ICUs (324 beds) in 2015/2016. This component aims to monitor new laboratory-confirmed influenza cases admitted to ICUs, allowing the severity of disease associated with influenza infection to be assessed. The Daily Mortality Surveillance System is a PNVG component that monitors weekly all-cause mortality during the influenza season; it is an epidemiological surveillance system intended to rapidly detect and estimate the impact of environmental or epidemic events associated with excess mortality. Notification of influenza-like illness (ILI) cases and collection of biological samples were carried out in the different networks participating in the PNVG: the Médicos-Sentinela network, the Emergency/Obstetrics Services network, physicians of the EuroEVA project, the Portuguese Laboratory Network for Influenza Diagnosis and the ICU influenza surveillance network. In the 2015/2016 influenza surveillance season, 1,273 ILI cases were notified, 87% of which were accompanied by a nasopharyngeal swab for laboratory diagnosis. In the winter of 2015/2016, influenza activity was of low intensity. The epidemic period occurred between week 53/2015 and week 8/2016, and the highest weekly ILI incidence rate (72.0/100,000) was observed in week 53/2015. According to the cases notified to the Médicos-Sentinela network, the 15-64 age group showed the highest cumulative incidence. Influenza virus was detected in 41.0% of the nasopharyngeal swabs received, and other respiratory viruses were detected in 24% of them. Influenza A(H1)pdm09 was the predominant virus, detected in 90.4% of influenza cases.
Other influenza viruses were also detected: B/Victoria lineage (8%), A(H3) (1.3%) and B/Yamagata lineage (0.5%). Antigenic analysis of the A(H1)pdm09 viruses showed their similarity to the 2015/2016 vaccine strain (A/California/7/2009); most of these viruses belong to the new genetic group 6B.1, which was the predominant group in circulation in Europe. Type B viruses, although detected in much smaller numbers than subtype A(H1)pdm09, were mostly of the Victoria lineage, which is antigenically distinct from the 2015/2016 vaccine strain (B/Phuket/3073/2013); the same situation was observed in the other European countries, the United States of America and Canada. The A(H3) subtype viruses are antigenically similar to the strain selected for the 2016/2017 vaccine (A/Hong Kong/4801/2014); genetically, most of the characterised viruses belong to group 3C.2a and are similar to the 2016/2017 vaccine strain. The assessment of resistance to neuraminidase-inhibitor antivirals did not reveal the circulation of strains with reduced susceptibility to neuraminidase inhibitors (oseltamivir and zanamivir); the situation in Portugal is similar to that observed at the European level. The highest percentage of influenza cases was found in individuals under 45 years of age. Fever, headache, malaise, myalgia, cough and chills showed a strong association with laboratory confirmation of an influenza case. The proportion of influenza cases was highest in patients with congenital or acquired immunodeficiency, followed by patients with diabetes and obesity. The overall percentage of influenza cases in pregnant women was similar to that observed in non-pregnant women of childbearing age; however, influenza A(H1)pdm09 was detected in a higher proportion of pregnant women than of non-pregnant women. The vaccine, as the main form of influenza prevention, is especially recommended for individuals aged 65 years or over, chronically ill and immunocompromised patients, pregnant women and health professionals. Influenza vaccination was reported in 13% of notified cases. Influenza virus was detected in 25% of vaccinated cases subjected to laboratory diagnosis, essentially associated with influenza A(H1)pdm09, the predominant virus of the 2015/2016 season; this was most frequently observed in individuals aged between 15 and 45 years. Confirmation of influenza in vaccinated individuals may be related to a moderate effectiveness of the influenza vaccine in the general population. Information on antiviral therapy was recorded in 67% of notified ILI cases, a higher proportion than in previous years. Antivirals were prescribed to a small number of patients (9.0%), of whom 45.0% reported at least one chronic disease or pregnancy; the most frequently prescribed antiviral was oseltamivir. Testing for other respiratory viruses in ILI cases negative for influenza revealed the circulation and involvement of other respiratory viral agents in ILI cases. Respiratory viruses were detected throughout the influenza surveillance period, between week 40/2015 and week 20/2016.
Apart from the influenza virus, hRV, hCoV and RSV were the most frequently detected agents, with RSV essentially associated with children under 4 years of age and hRV and hCoV with adults and the older population (≥ 65 years). The Portuguese Laboratory Network for Influenza Diagnosis performed influenza diagnosis in 7,443 cases of respiratory infection, with influenza virus detected in 1,458 of these cases. Influenza A(H1)pdm09 was detected in 71% of the influenza cases; influenza A(H3) viruses were detected sporadically and in very small numbers (2%), influenza A (not subtyped) in 11%, and influenza B in 16% of cases. The frequency of each influenza virus type and subtype identified in the hospital network is similar to that observed in primary health care (the Médicos-Sentinela network and emergency services). It was among adults aged 45-64 years that A(H1)pdm09 accounted for the largest proportion of influenza cases, including the largest proportion of patients requiring admission to intensive care units. Influenza B was associated with confirmed influenza cases in children aged 5 to 14 years. Other respiratory viruses were also detected, with RSV and picornaviruses (hRV, hEV and picornavirus) the most frequent, co-circulating with the influenza virus. During the 2015/2016 influenza surveillance season, no weekly excess mortality was observed. In the ICUs there was a clear dominance of influenza A(H1)pdm09 (90%), with simultaneous circulation of influenza B (3%). The ICU admission rate varied between 5.8% and 4.7% from week 53 to week 12, with the maximum recorded in week 8 of 2016 (8.1%). About half of the patients were between 45 and 64 years old; the elderly (65+ years) accounted for only 20% of cases, which is not surprising given that A(H1)pdm09 circulated as the dominant virus. Approximately 70% of patients had an underlying chronic disease, with obesity the most frequent (37%); compared with the pandemic, in which A(H1)pdm09 also circulated and obesity was reported in 9.8% of cases, obesity was about 4 times more frequent in 2015/2016. Only 8% of patients had received the seasonal influenza vaccine, despite more than 70% having an underlying chronic disease and despite DGS recommendations to that effect. The case fatality rate was estimated at 29.3%, higher than in the previous season (23.7%). About 80% of deaths occurred in individuals with an underlying chronic disease that may have aggravated the clinical picture and contributed to death. The absence of published historical data on ICU case fatality for comparison should be noted, as should the fact that this estimate refers only to deaths occurring during ICU hospitalisation; further deaths may have occurred after discharge from the ICU to other wards. This seasonal ICU influenza surveillance system may be improved in coming seasons by reducing under-reporting and improving the completion of the fields needed to study the disease. The 2015/2016 influenza surveillance season was in many respects comparable to that described in most European countries; the situation in Portugal stood out for the low intensity of influenza activity and the predominance of influenza subtype A(H1)pdm09, accompanied by the detection of type B viruses (Victoria lineage) essentially at the end of the influenza season.
All-cause mortality during the influenza epidemic remained within expected levels, with no excess mortality observed. The viruses of the predominant subtype in the 2015/2016 season, A(H1)pdm09, proved antigenically similar to the vaccine strain, whereas the influenza B viruses detected differed from the 2015/2016 vaccine strain; this led to an update of the influenza vaccine composition for the 2016/2017 season. Continuous monitoring of the influenza epidemic at national and global level makes it possible each winter to assess the impact of influenza on the health of the population, to monitor the evolution of influenza viruses and to act so as to prevent the disease and implement effective treatment measures, especially when it is accompanied by severe complications.
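Two of the indicators reported above reduce to simple arithmetic; the sketch below shows the calculations, with placeholder counts chosen only to reproduce the reported 72.0/100,000 incidence and 29.3% ICU case fatality, since the actual denominators are not given here.

```python
# Hedged arithmetic behind the weekly ILI incidence rate per 100,000 population
# under observation, and the ICU case fatality rate. All counts are placeholders.
def incidence_per_100k(cases: int, population_under_observation: int) -> float:
    return 100_000 * cases / population_under_observation

def case_fatality_rate_pct(deaths: int, cases: int) -> float:
    return 100 * deaths / cases

# e.g. 36 ILI cases in a week over a sentinel population of 50,000 gives an
# incidence of 72.0/100,000, the peak value reported for week 53/2015.
print(incidence_per_100k(36, 50_000))      # 72.0
print(case_fatality_rate_pct(29, 99))      # ~29.3, the reported ICU figure
```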

Relevance:

30.00%

Publisher:

Abstract:

Trees, hedgerows and woods are the current configuration of the tree network in several ecological regions of the world. In the Trás-os-Montes region, in the northeast of Portugal, they are a traditional component of the Terra Fria landscape and appear in several forms: scattered trees, fencerows, small woodlots and riparian buffer strips, among others. The extensive livestock systems in this region are based on a set of circuits across the landscape. In this practice, flocks interact with these structures, using them for different functions, which influences the itineraries. Our purpose is to focus on the woody features of the landscape, regarding their configuration, abundance and spatial distribution, in order to examine how the grazing systems depend on the presence of these formations and, in particular, how the flocks' behaviour relates to them. Using spatial data, the investigation compares the tree network within the agricultural matrix to the grazed territory crossed by the flocks, and highlights the importance of spatial data in interpreting the issue by suggesting different parameters that may influence the circuits. It is possible to recognise the pressure exerted by the occurrence of woody structures on the grazing circuits. We believe that the role of these woody features in supporting the traditional silvopastoral systems has been sufficiently strong to change their distribution pattern.

Relevance:

30.00%

Publisher:

Abstract:

Current trends in broadband mobile networks point towards the placement of different capabilities at the edge of the mobile network in a centralised way. On one hand, the split of the eNB between baseband processing units and remote radio heads makes it possible to process some of the protocols in centralised premises, likely with virtualised resources. On the other hand, mobile edge computing makes use of processing and storage capabilities close to the air interface in order to deploy optimised services with minimum delay. The confluence of both trends is a hot topic in the definition of future 5G networks. The full centralisation of both technologies in cloud data centres imposes stringent requirements on the fronthaul connections in terms of throughput and latency. Therefore, all those cells with limited network access would not be able to offer these types of services. This paper proposes a solution for these cases, based on the placement of processing and storage capabilities close to the remote units, which is especially well suited for the deployment of clusters of small cells. The proposed cloud-enabled small cells include a highly efficient microserver with a limited set of virtualised resources offered to the cluster of small cells. As a result, a light data centre is created and commonly used for deploying centralised eNB and mobile edge computing functionalities. The paper covers the proposed architecture, with special focus on the integration of both aspects, and possible scenarios of application.
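The "stringent fronthaul requirements" of full centralisation can be made concrete with a rough CPRI-style bit-rate estimate; the formula and parameter values below are common textbook approximations, not figures from the paper.

```python
# Rough, illustrative estimate of the fronthaul bit rate a fully centralised
# split (raw I/Q samples over CPRI-like transport) would demand.
def iq_fronthaul_rate_gbps(bandwidth_hz: float, antennas: int,
                           bits_per_sample: int = 15,
                           oversampling: float = 1.536,
                           overhead: float = 16 / 15) -> float:
    sample_rate = bandwidth_hz * oversampling                       # complex samples/s
    rate = sample_rate * 2 * bits_per_sample * antennas * overhead  # bit/s (I and Q)
    return rate / 1e9

# A 20 MHz carrier with 2 antennas already needs on the order of 2 Gbit/s of
# fronthaul (more with line coding), far above the user traffic it carries --
# which is why cells with limited network access benefit from local processing.
print(f"{iq_fronthaul_rate_gbps(20e6, antennas=2):.2f} Gbit/s")
```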

Relevance:

30.00%

Publisher:

Abstract:

Part 21: Mobility and Logistics

Relevance:

30.00%

Publisher:

Abstract:

This document summarises a major part of the work performed by the FP7-JERICO consortium, comprising 27 partner institutions, over 4 years (2011-2015). Its objective is to propose a strategy for European coastal observation and monitoring. To do so, we give an overview of the main achievements of the FP7-JERICO project. From this overview, gaps are analysed to draw recommendations for the future. Overview, gaps and recommendations are addressed at both the hardware and software levels of the JERICO research infrastructure. The main part of the document is built upon this analysis, as well as upon discussions held in dedicated JERICO strategy workshops, to set out a general strategy for the future, giving priorities to be targeted and possible funding mechanisms. This document was initiated in 2014 by the coordination team, but since an overview of the entire project and its achievements was needed to feed this strategy deliverable, it could not be completed before the end of FP7-JERICO in April 2015. The preparation of the JERICO-NEXT proposal in summer 2014, in answer to an H2020 call for proposals, pushed the consortium ahead and fed deep thought about this strategy, but the intention was not to propose a strategy bounded only by the JERICO-NEXT answer. The authors are conscious that writing JERICO-NEXT introduces a bias into these thoughts and have tried to remain open; comments are always welcome to go further ahead. Structure of the document: Chapter 3 introduces the need for sustained coastal observatories from different points of view, including a short description of the FP7-JERICO project. In Chapter 4, an analysis of the JERICO coastal observatory hardware (platforms and sensors) is provided region by region, in terms of status at the end of JERICO, identified gaps and recommendations for further development; the main challenges that remain to be overcome are also summarised. Chapter 5 is dedicated to the JERICO infrastructure software (calibration, operation, quality assessment, data management) and the progress made through JERICO on harmonisation of procedures and definition of best practices. Chapter 6 provides elements of a strategy towards sustainable and integrated coastal observations for Europe, drawing a roadmap for cost-effective, science-based consolidation of the present infrastructure while maximising the potential arising from JERICO in terms of innovation, wealth creation and business development. After reading Chapter 3, readers unfamiliar with JERICO can read any chapter independently. More details are available in the JERICO final report and its intermediate reports, all of which, along with every deliverable, are available on the JERICO website (www.jerico-FP7.eu). Each chapter lists the relevant JERICO documents, and a short bibliography is provided at the end of this deliverable.

Relevance:

30.00%

Publisher:

Abstract:

No other technology has affected daily life at this level or seen such speedy adoption as the mobile phone. At the same time, mobile media has developed into a serious marketing tool for all kinds of businesses, and the industry has grown explosively in recent years. The objective of this thesis is to inspect the mobile marketing process of an international event. This thesis is a qualitative case study. The chosen case is the mobile marketing process of the Falun2015 FIS Nordic World Ski Championships, due to the researcher's interest in the topic and contacts with the people around the event. The empirical findings were acquired by conducting two interviews with three experts from the case organisation and its partner organisation. The interviews were performed as semi-structured interviews utilising the themes arising from the chosen theoretical framework. The framework distinguishes six phases in the process: (i) campaign initiation, (ii) campaign design, (iii) campaign creation, (iv) permission management, (v) delivery, and (vi) evaluation and analysis. Phases one and five were not examined in this thesis, because campaign initiation was not seen purely as part of the campaign implementation, and investigating phase five would have required a very technical viewpoint. In addition to the interviews, some pre-established documents were used as supporting data. The empirical findings of this thesis mainly follow the theoretical framework utilised; however, some modifications to the model could be made, mainly related to the order of the different phases. In the revised model, the actions are categorised depending on the time at which they should be conducted, i.e. before, during or after the event. Regardless of the categorisation, the phases can occur in a different order and overlap. In addition, the business network was strongly emphasised by the empirical findings and is thus added to the modified model. Five managerial recommendations can be drawn from the empirical findings of this thesis: (i) the importance of a business network should be highly valued in a mobile marketing process; (ii) clear goals should be defined for mobile marketing actions in order to make sure that everyone involved is aware of them; (iii) interactivity should be perceived as part of mobile marketing communication; (iv) enough time should be allowed for the development of a mobile marketing process in order to exploit all the potential it can offer; and (v) attention should be paid to measuring and analysing matters that are of relevance.