989 results for Sink nodes


Relevance:

10.00%

Publisher:

Abstract:

In wireless ad hoc networks, nodes communicate with faraway destinations using intermediate nodes as relays. Since wireless nodes are energy-constrained, it may not be in the best interest of a node to always accept relay requests. On the other hand, if all nodes decide not to expend energy on relaying, then network throughput will drop dramatically. Both of these extreme scenarios (complete cooperation and complete noncooperation) are inimical to the interests of a user. In this paper, we address the issue of user cooperation in ad hoc networks. We assume that nodes are rational, i.e., their actions are strictly determined by self-interest, and that each node is associated with a minimum lifetime constraint. Given these lifetime constraints and the assumption of rational behavior, we are able to determine the optimal share of service that each node should receive. We define this to be the rational Pareto optimal operating point. We then propose a distributed and scalable acceptance algorithm called Generous TIT-FOR-TAT (GTFT). The acceptance algorithm is used by the nodes to decide whether to accept or reject a relay request. We show that GTFT results in a Nash equilibrium and prove that the system converges to the rational Pareto optimal operating point.
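As a concrete illustration of such an acceptance rule, here is a minimal sketch of a GTFT-style decision; the names `tau` and `eps` are our illustrative stand-ins for the target share derived from the lifetime constraints and the "generosity" margin, not the paper's exact formulation:

```python
class GTFTNode:
    """Hedged sketch of a Generous TIT-FOR-TAT acceptance rule.

    tau -- rational Pareto optimal share of relay requests this node
           should serve (assumed computed offline from the model);
    eps -- small generosity margin that lets cooperation bootstrap.
    """
    def __init__(self, tau, eps=0.05):
        self.tau, self.eps = tau, eps
        self.accepted = 0   # relay requests this node has served
        self.received = 0   # relay requests made of this node

    def on_relay_request(self):
        self.received += 1
        # Accept as long as our acceptance ratio stays within the
        # target share plus the generosity margin.
        if self.accepted / self.received <= self.tau + self.eps:
            self.accepted += 1
            return True
        return False
```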

Relevance:

10.00%

Publisher:

Abstract:

An ad hoc network is composed of mobile nodes without any infrastructure. Recent trends in applications of mobile ad hoc networks rely increasingly on group-oriented services; hence multicast support is critical for ad hoc networks. We also need to provide service differentiation schemes for different groups of users. An efficient application layer multicast (APPMULTICAST) solution suitable for low-mobility applications in a MANET environment has been proposed in [10]. In this paper, we present an improved application layer multicast solution suitable for medium-mobility applications in a MANET environment. We define multicast groups with low priority and high priority and incorporate a two-level service differentiation scheme. We use network layer support to build the overlay topology closer to the actual network topology, and we try to maximize the Packet Delivery Ratio. Through simulations we show that the control overhead of our algorithm is within acceptable limits and that it achieves an acceptable Packet Delivery Ratio for medium-mobility applications.
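A hedged sketch of the two ideas combined, topology-aware parent selection with two priority levels, might look as follows; the data structures and the exact differentiation rule are our illustrative assumptions, not the paper's algorithm:

```python
def choose_overlay_parent(candidates, my_priority):
    """Illustrative sketch of topology-aware parent selection with a
    two-level service differentiation scheme. `candidates` is a list
    of (node_id, hop_distance, spare_capacity) tuples assumed to come
    from network-layer support.
    """
    # High-priority members may attach to any candidate; low-priority
    # members only to candidates with spare forwarding capacity, so
    # high-priority traffic is served first under load.
    usable = [c for c in candidates
              if my_priority == "high" or c[2] > 0]
    if not usable:
        return None
    # Prefer the parent closest in the actual network topology, which
    # keeps the overlay aligned with the underlying routes.
    return min(usable, key=lambda c: c[1])[0]
```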

Relevance:

10.00%

Publisher:

Abstract:

A large fraction of an XML document typically consists of text data. The XPath query language allows text search via the equal, contains, and starts-with predicates. Such predicates can be efficiently implemented using a compressed self-index of the document's text nodes. Most queries, however, contain some parts querying the text of the document and some parts querying the tree structure. It is therefore a challenge to choose an appropriate evaluation order for a given query that optimally leverages the execution speeds of the text and tree indexes. Here the SXSI system is introduced. It stores the tree structure of an XML document using a bit array of opening and closing brackets plus a sequence of labels, and stores the text nodes of the document using a global compressed self-index. On top of these indexes sits an XPath query engine based on tree automata. The engine uses fast counting queries of the text index to dynamically determine whether to evaluate top-down or bottom-up with respect to the tree structure. The resulting system has several advantages over existing systems: (1) on pure tree queries (without text search), such as the XPathMark queries, SXSI performs on par with or better than the fastest known systems, MonetDB and Qizx; (2) on queries that use text search, SXSI outperforms the existing systems by one to three orders of magnitude (depending on the size of the result set); and (3) with respect to memory consumption, SXSI outperforms all other systems for counting-only queries.
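To make the bracket encoding concrete, here is a minimal sketch (ours, not SXSI's actual API or input format) that serializes a tiny labelled tree into the bit array of opening/closing brackets plus the parallel label sequence described above:

```python
def encode_tree(node):
    """Toy version of the succinct tree representation: one bit per
    opening/closing 'bracket' plus a parallel label sequence.
    `node` is assumed to be (label, [children]).
    """
    bits, labels = [], []

    def walk(n):
        label, children = n
        bits.append(1)          # opening bracket: enter the element
        labels.append(label)
        for c in children:
            walk(c)
        bits.append(0)          # closing bracket: leave the element

    walk(node)
    return bits, labels

# <a><b/><c><d/></c></a>  ->  bits 11011000, labels a, b, c, d
bits, labels = encode_tree(("a", [("b", []), ("c", [("d", [])])]))
```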

Relevance:

10.00%

Publisher:

Abstract:

Bandwidth allocation for multimedia applications in the case of network congestion and failure poses technical challenges due to the bursty and delay-sensitive nature of the applications. The growth of multimedia services on the Internet and the development of agent technology have led us to investigate new techniques for resolving bandwidth issues in multimedia communications. Agent technology is emerging as a flexible and promising solution for network resource management and QoS (Quality of Service) control in a distributed environment. In this paper, we propose an adaptive bandwidth allocation scheme for multimedia applications that deploys static and mobile agents. It is a run-time allocation scheme that functions at the network nodes. The technique adaptively finds an alternate patch-up route for every congested/failed link and reallocates the bandwidth for the affected multimedia applications. The designed method has been tested analytically and by simulation with various network sizes and conditions, and the results are presented to assess the performance and effectiveness of the approach. This work also demonstrates some of the benefits of agent-based schemes in providing flexibility, adaptability, software reusability, and maintainability.
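The patch-up idea can be illustrated with a small sketch that searches for an alternate route around a single congested or failed link; this is a plain breadth-first search, with the paper's agent-based machinery and bandwidth accounting abstracted away:

```python
import collections

def patchup_route(graph, failed_link, src, dst):
    """Find an alternate route from src to dst that avoids the
    congested/failed link, so the affected flow's bandwidth can be
    reallocated along it. `graph` maps node -> set of neighbours.
    """
    u, v = failed_link
    queue = collections.deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        last = path[-1]
        if last == dst:
            return path                      # alternate route found
        for nxt in graph[last]:
            # Skip the failed link in either direction.
            if {last, nxt} == {u, v} or nxt in seen:
                continue
            seen.add(nxt)
            queue.append(path + [nxt])
    return None                              # no patch-up route exists
```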

Relevance:

10.00%

Publisher:

Abstract:

Interaction between forests and the atmosphere occurs through radiative and turbulent transport. The fluxes of energy and mass between the surface and the atmosphere directly influence the properties of the lower atmosphere and, on longer time scales, the global climate. Boreal forest ecosystems are central to the global climate system and its responses to human activities, because they are significant sources and sinks of greenhouse gases and of aerosol particles. The aim of the present work was to improve our understanding of the interplay between the biologically active canopy, the microenvironment, and the turbulent flow. Specifically, the aim was to quantify the contribution of different canopy layers to whole-forest fluxes. For this purpose, long-term micrometeorological and ecological measurements made in a Scots pine (Pinus sylvestris) forest at the SMEAR II research station in Southern Finland were used. The properties of turbulent flow are strongly modified by the interaction with the canopy elements: momentum is efficiently absorbed in the upper layers of the canopy, mean wind speed and turbulence intensities decrease rapidly towards the forest floor, and the power spectrum is modulated by a spectral short-cut. In the relatively open forest, diabatic stability above the canopy explained much of the variation in velocity statistics within the canopy, except in strongly stable stratification. Large eddies, ranging from tens to a hundred meters in size, were responsible for the major fraction of turbulent transport between the forest and the atmosphere. Because of this, the eddy-covariance (EC) method proved successful for measuring energy and mass exchange inside the forest canopy, with the exception of strongly stable conditions. Vertical variations of the within-canopy microclimate, light attenuation in particular, strongly affect assimilation and transpiration rates. According to model simulations, assimilation rate decreases with height more rapidly than stomatal conductance (gs) and transpiration; consequently, the vertical source-sink distributions of carbon dioxide (CO2) and water vapor (H2O) diverge. Upscaling from shoot scale to canopy scale was found to be sensitive to the chosen description of stomatal control: the upscaled canopy-level CO2 fluxes can vary by as much as 15% and H2O fluxes by 30% even when the gs models are calibrated against the same leaf-level dataset. A pine forest has distinct overstory and understory layers, both of which contribute significantly to canopy-scale fluxes. The forest floor vegetation and soil accounted for 18-25% of evapotranspiration and 10-20% of sensible heat exchange. The forest floor was also an important deposition surface for aerosol particles; 10-35% of the dry deposition of particles in the 10-30 nm size range occurred there. Because of the northern latitude, the seasonal cycle of climatic factors strongly influences the surface fluxes. Besides these seasonal constraints, the partitioning of available energy into sensible and latent heat depends, through stomatal control, on the physiological state of the vegetation. In spring, available energy is consumed mainly as sensible heat, and the latent heat flux peaks about two months later, in July-August. On the other hand, annual evapotranspiration remains rather stable over a range of environmental conditions, and thus any increase in accumulated radiation affects primarily the sensible heat exchange.
Finally, autumn temperature had a strong effect on ecosystem respiration, but its influence on photosynthetic CO2 uptake was restricted by low radiation levels. Therefore, the projected autumn warming in the coming decades will presumably reduce the positive effects of earlier spring recovery on the carbon uptake potential of boreal forests.
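For reference, the eddy-covariance method mentioned above estimates the vertical flux of a scalar c as the covariance of the fluctuations of vertical wind speed and the scalar over an averaging period (the standard definition, not specific to this thesis):

```latex
% Eddy-covariance flux of a scalar c: the covariance of the
% fluctuations (primes) of vertical wind speed w and the scalar,
% averaged (overbar) over the flux-averaging period of N samples.
F_c = \overline{w'c'}
    = \frac{1}{N}\sum_{i=1}^{N}\bigl(w_i - \overline{w}\bigr)\bigl(c_i - \overline{c}\bigr)
```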

Relevance:

10.00%

Publisher:

Abstract:

Agriculture's contribution to climate change is controversial, as agriculture is both a significant source of greenhouse gases and a sink of carbon. Hence its economic and technological potential to mitigate climate change has been argued to be noteworthy. However, the social profitability of emission mitigation depends on factors beyond emission reductions, such as impacts on surface water quality and profit from production. Consequently, to value the overall outcome of agricultural climate-emission mitigation practices, these environmental and economic co-effects should be taken into account. The objective of this thesis was to develop an integrated economic and ecological model to analyse the social welfare of crop cultivation in Finland under two distinct cultivation technologies, conventional tillage and conservation tillage (no-till). Further, we ask whether it would be privately or socially profitable to allocate some barley cultivation to alternative land uses, such as green set-aside or afforestation, when production costs, greenhouse gases, and water quality impacts are taken into account. In the theoretical framework we depict the optimal input use and land allocation choices in terms of environmental impacts and profit from production, and derive the optimal tax and payment policies for climate- and water-quality-friendly land allocation. The empirical application of the model uses Finnish data on production costs, profit structure, and environmental impacts. According to our results, the given emission mitigation practices are not self-evidently beneficial for farmers or society. On the contrary, in some cases alternative land allocation could even reduce social welfare, favoring conventional crop cultivation. This is the case on mineral soils such as clay and silt soils. On organic agricultural soils, the climate mitigation practices, in this case afforestation and green fallow, give more promising results, decreasing climate emissions and nutrient runoff to water systems. No-till technology does not appear to benefit climate mitigation, although it does decrease other environmental impacts. Nevertheless, the data on how emission mitigation practices affect production and climate are limited and partly contradictory; more specific experimental studies on the interaction of emission mitigation practices and the environment would be needed. In further study, area-specific production and environmental factors, as well as food security, food safety, and socio-economic impacts, should be taken into account.
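A generic sketch of the kind of social welfare objective such an integrated model maximizes (our illustrative formulation, not the thesis's exact model) is:

```latex
% Sketch of an integrated welfare objective: pi is private profit
% from input use x and land allocation L; p_E and p_R are the social
% prices of greenhouse-gas emissions E and nutrient runoff R.
\max_{x,\,L} \; W(x,L) = \pi(x,L) - p_E\,E(x,L) - p_R\,R(x,L)
```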

Relevance:

10.00%

Publisher:

Abstract:

Query incentive networks capture the role of incentives in extracting information from decentralized information networks such as social networks. Several game theoretic models of query incentive networks have been proposed in the literature to study and characterize how the monetary reward required to extract the answer to a query depends on various factors, such as the structure of the network, the level of difficulty of the query, and the required success probability. None of the existing models, however, captures the practical and important factor of the quality of answers. In this paper, we develop a complete mechanism design based framework to incorporate the quality of answers in the monetization of query incentive networks. First, we extend the model of Kleinberg and Raghavan [2] to allow the nodes to modulate the incentive on the basis of the quality of the answer they receive. For this quality-conscious model, we show the existence of a unique Nash equilibrium and study the impact of the quality of answers on the growth rate of the initial reward with respect to the branching factor of the network. Next, we present two mechanisms, the direct comparison mechanism and the peer prediction mechanism, for truthful elicitation of quality from the agents. These mechanisms are based on scoring rules and cover different scenarios which may arise in query incentive networks. We show that the proposed quality elicitation mechanisms are incentive compatible and ex-ante budget balanced. We also derive conditions under which ex-post budget balance can be achieved by these mechanisms.
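Scoring rules of the kind underlying these mechanisms can be illustrated with the quadratic (Brier) rule; the sketch below shows a generic proper scoring rule, not the paper's exact payment scheme:

```python
def quadratic_score(report, outcome):
    """Quadratic (Brier) proper scoring rule. `report` is an agent's
    stated probability that an answer is high quality; `outcome` is 1
    if it turns out high quality, else 0. Truthful reporting maximizes
    the expected score; mechanisms like direct comparison and peer
    prediction build budget-balancing payments on top of such rules.
    """
    return 1.0 - (outcome - report) ** 2
```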

Relevance:

10.00%

Publisher:

Abstract:

Atmospheric particles affect the radiation balance of the Earth and thus the climate. New particle formation by nucleation has been observed in diverse atmospheric conditions, but the actual formation path is still unknown. The prevailing conditions can be exploited to evaluate proposed formation mechanisms. This study aims to improve our understanding of new particle formation from the viewpoint of atmospheric conditions. The role of atmospheric conditions in particle formation was studied by atmospheric measurements, theoretical model simulations, and simulations based on observations. Two separate column models were further developed for aerosol and chemical simulations. The model simulations allowed us to expand the study from local conditions to varying conditions in the atmospheric boundary layer, while the long-term measurements characterized the mean conditions associated with new particle formation. The observations show a statistically significant difference in meteorological and background aerosol conditions between observed event and non-event days. New particle formation above the boreal forest is associated with strong convective activity, low humidity, and a low condensation sink. The probability of a particle formation event is predicted by an equation formulated for upper boundary layer conditions. The model simulations call into question whether kinetic sulphuric acid induced nucleation is the primary particle formation mechanism in the presence of organic vapours. At the same time, the simulations show that ignoring spatial and temporal variation in new particle formation studies may lead to faulty conclusions. On the other hand, the theoretical simulations indicate that short-scale variations in temperature and humidity are unlikely to have a significant effect on the mean binary water-sulphuric acid nucleation rate. The study emphasizes the significance of mixing and fluxes in particle formation studies, especially in the atmospheric boundary layer. The further developed models allow extensive aerosol physical and chemical studies in the future.
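The condensation sink referred to above is conventionally defined as follows (the standard aerosol-physics definition, e.g. for sulphuric acid vapour, rather than anything specific to this study):

```latex
% Condensation sink: the rate (s^-1) at which a condensing vapour is
% scavenged by the pre-existing particle population. D is the vapour
% diffusion coefficient, beta_{m,i} the transition-regime correction
% factor, and r_i, N_i the radius and number concentration of size
% class i.
CS = 4\pi D \sum_i \beta_{m,i}\, r_i\, N_i
```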

Relevance:

10.00%

Publisher:

Abstract:

As the virtual world grows more complex, finding a standard way of storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so that the data can maintain its identity while being passed around. This way there will be only one copy of a user's family photo album, while the user can use multiple tools to show or manipulate the album. Copies of the user's data could be stored on the computers of family members, on several of the user's own computers, and also at the online services he uses. When all actors operate on one replicated copy of the data, the system automatically avoids a single point of failure: the data will not disappear when one computer breaks or one service provider goes out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, at the user's request. In our research we tried to find a model that would make data manageable to users and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet, and GNUnet, that suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing anonymous web access, and preventing censorship in file sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and being monitored. All of the systems use cryptography to secure the names used for the content and to protect the data from outsiders. Based on the gained knowledge, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. The data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends instead of letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by exposing its contents through an integrated HTTP server. The REST-like HTTP API supports the development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we are not expecting our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data.
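The cryptographically verifiable references discussed above follow the self-certifying naming idea, which a minimal sketch makes concrete (Peerscape's actual naming scheme may differ; this shows only the general principle):

```python
import hashlib

def content_reference(data: bytes) -> str:
    """Derive a data item's name from its content, so any copy fetched
    from a third party can be checked against the reference.
    """
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, ref: str) -> bool:
    # A copy is authentic iff it hashes back to the reference,
    # regardless of which replica or service it came from.
    return content_reference(data) == ref
```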

Relevance:

10.00%

Publisher:

Abstract:

The complex web of interactions between the host immune system and the pathogen determines the outcome of any infection. A computational model of this interaction network, which encodes the complex interplay among host and bacterial components, forms a useful basis for improving the understanding of pathogenesis, filling knowledge gaps, and consequently identifying strategies to counter the disease. We have built an extensive model of the Mycobacterium tuberculosis host-pathogen interactome, consisting of 75 nodes corresponding to host and pathogen molecules, cells, cellular states, or processes. Vaccination effects, clearance efficiencies due to drugs, and growth rates have also been encoded in the model. The system is modelled as a Boolean network. Virtual deletion experiments, multiple parameter scans, and analysis of the system's response to perturbations indicate that disabling processes such as phagocytosis and phagolysosome fusion, or cytokines such as TNF-alpha and IFN-gamma, greatly impairs bacterial clearance, while removing cytokines such as IL-10 alongside bacterial defence proteins such as SapM greatly favours clearance. Simulations indicate a high propensity of the pathogen to persist under different conditions.
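To make the Boolean network formulation concrete, here is a toy synchronous update with illustrative rules; the real model has 75 nodes and its own rule set, and these rules are ours, chosen only to mirror the phagolysosome-fusion finding:

```python
def step(state, rules):
    """One synchronous Boolean network update. `state` maps node name
    to 0/1; `rules` maps node name to a function of the current state
    giving its next value.
    """
    return {node: int(rule(state)) for node, rule in rules.items()}

# Toy rules: phagolysosome fusion needs phagocytosis and IFN-gamma,
# and the bacterial load persists unless fusion clears it.
rules = {
    "phagocytosis": lambda s: s["phagocytosis"],
    "ifn_gamma":    lambda s: s["ifn_gamma"],
    "fusion":       lambda s: s["phagocytosis"] and s["ifn_gamma"],
    "bacteria":     lambda s: s["bacteria"] and not s["fusion"],
}
state = {"phagocytosis": 1, "ifn_gamma": 1, "fusion": 0, "bacteria": 1}
for _ in range(3):
    state = step(state, rules)   # bacteria cleared once fusion fires
```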

Relevance:

10.00%

Publisher:

Abstract:

The primary aim of this thesis was the evaluation of the perfusion of normal organs in cats using contrast-enhanced ultrasound (CEUS), to serve as a reference for later clinical studies. Little is known of the use of CEUS in cats, especially regarding its safety and the effects of anesthesia on the procedure; thus, secondary aims here were to validate the quantitative analysis method, to investigate the biological effects of CEUS on feline kidneys, and to assess the effect of anesthesia on splenic perfusion in cats undergoing CEUS.

The studies were conducted on healthy, young, purpose-bred cats. CEUS of the liver, left kidney, spleen, pancreas, small intestine, and mesenteric lymph nodes was performed on ten anesthetized male cats to characterize the normal perfusion of these organs. To validate the quantification method, the effects of the placement and size of the region of interest (ROI) on perfusion parameters were investigated using CEUS: three separate sets of ROIs were placed in the kidney cortex, varying in location, size, or depth. The biological effects of CEUS on feline kidneys were estimated by measuring urinary enzymatic activities, analyzing urinary specific gravity, pH, protein, creatinine, albumin, and sediment, and measuring plasma urea and creatinine concentrations before and after CEUS. Finally, the impact of anesthesia on contrast enhancement of the spleen was investigated by imaging cats with CEUS, first awake and later under anesthesia on separate days.

Typical perfusion patterns were found for each of the studied organs. The liver had a gradual and more heterogeneous perfusion pattern due to its dual blood supply and close proximity to the diaphragm. An obvious and statistically significant difference emerged in the perfusion between the kidney cortex and medulla. Enhancement in the spleen was very heterogeneous at the beginning of imaging, indicating focal dissimilarities in perfusion. No significant differences emerged in the perfusion parameters between the pancreas, small intestine, and mesenteric lymph nodes.

The ROI placement and size were found to influence the quantitative measurements of CEUS. Increasing the depth or size of the ROI decreased the peak intensity value significantly, suggesting that where and how the ROI is placed matters in quantitative analyses.

A significant increase occurred in the urinary N-acetyl-β-D-glucosaminidase (NAG) to creatinine ratio after CEUS. No changes were noted in the serum biochemistry profile after CEUS, with the exception of a small decrease in blood urea concentration. The magnitude of the rise in the NAG/creatinine ratio was, however, less than the circadian variation reported earlier in healthy cats. Thus, the changes observed in the laboratory values after CEUS of the left kidney did not indicate any detrimental effects on the kidneys. Heterogeneity of the spleen was less, and the time of first contrast appearance earlier, in non-anesthetized cats than in anesthetized ones, suggesting that anesthesia increases the heterogeneity of the feline spleen in CEUS.

In conclusion, the results suggest that CEUS can be used in feline veterinary patients as an additional diagnostic aid. The perfusion patterns found in the imaged organs were typical and similar to those seen earlier in other species, with the exception of the heterogeneous perfusion pattern in the cat spleen. Differences in the perfusion between organs corresponded with physiology.
Based on the results, estimation of focal perfusion defects of the spleen in cats should be performed with caution and after the disappearance of the initial heterogeneity, especially in anesthetized or sedated cats. Finally, these results indicate that CEUS can be used safely to analyze kidney perfusion in cats. Future clinical studies are needed to evaluate the full potential of CEUS in feline medicine as a tool for diagnosing lesions in various organ systems.
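The ROI-based quantification step can be sketched as follows (illustrative only; dedicated CEUS analysis software additionally fits a perfusion model to the curve, and the names here are our assumptions):

```python
import numpy as np

def peak_enhancement(frames, roi_mask):
    """Average the contrast intensity inside a region of interest for
    every frame of a CEUS loop, giving a time-intensity curve, and
    read off the peak intensity.
    frames   -- array of shape (time, height, width)
    roi_mask -- boolean array of shape (height, width)
    """
    curve = frames[:, roi_mask].mean(axis=1)   # time-intensity curve
    return curve.max(), curve                  # peak intensity, curve
```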

Relevance:

10.00%

Publisher:

Abstract:

Physics teachers are in a key position to form the attitudes and conceptions of future generations toward science and technology, as well as to educate future generations of scientists. Therefore, good teacher education is one of the key areas of a physics department's education program. This dissertation is a contribution to the research-based development of high-quality physics teacher education, designed to meet three central challenges of good teaching. The first challenge relates to the organization of physics content knowledge. The second challenge, connected to the first, is to understand the role of experiments and models in (re)constructing the content knowledge of physics for purposes of teaching. The third challenge is to provide pre-service physics teachers with opportunities and resources for reflecting on and assessing their knowledge and experience of physics and physics education. This dissertation demonstrates how these challenges can be met when the content knowledge of physics, the relevant epistemological aspects of physics, and the pedagogical knowledge of teaching and learning physics are combined. The theoretical part of this dissertation is concerned with designing two didactical reconstructions for purposes of physics teacher education: the didactical reconstruction of processes (DRoP) and the didactical reconstruction of structures (DRoS). This part starts by taking into account the required professional competencies of physics teachers, the pedagogical aspects of teaching and learning, and the benefits of graphical ways of representing knowledge. It then continues with the conceptual and philosophical analysis of physics, especially with the analysis of the role of experiments and models in constructing knowledge. This analysis is condensed in the form of the epistemological reconstruction of knowledge justification. Finally, these two parts are combined in the design and production of the DRoP and DRoS. The DRoP captures the formation of knowledge about physical concepts and laws in a concise and simplified form while still retaining authenticity with respect to the processes by which the concepts were formed. The DRoS is used for representing the structural knowledge of physics, the connections between physical concepts, quantities, and laws, to varying extents. Both DRoP and DRoS are represented graphically by means of flow charts consisting of nodes and directed links connecting the nodes. The empirical part discusses two case studies that show how the three challenges are met through the use of DRoP and DRoS and how the outcomes of teaching solutions based on them are evaluated. The research approach is qualitative; it aims at an in-depth evaluation and understanding of the usefulness of the didactical reconstructions. The data, collected from the advanced course for prospective physics teachers during 2001-2006, consisted of DRoP and DRoS flow charts made by students and of student interviews. The first case study discusses how student teachers used DRoP flow charts to understand the process of forming knowledge about the law of electromagnetic induction. The second case study discusses how student teachers learned to understand the development of physical quantities related to the temperature concept by using DRoS flow charts. In both studies, attention is focused on the use of DRoP and DRoS to organize knowledge and on the role of experiments and models in this organization process.
The results show that the students' understanding of physics knowledge production improved and their knowledge became more organized and coherent. It is shown that the flow charts and the didactical reconstructions behind them had an important role in achieving these positive learning results. On the basis of the results reported here, the designed learning tools have been adopted as a standard part of the teaching solutions used in the physics teacher education courses in the Department of Physics, University of Helsinki.

Relevance:

10.00%

Publisher:

Abstract:

In this thesis work, we design rigorous and efficient protocols and mechanisms for different types of wireless networks using a mechanism design [1] and game theoretic [2] approach. Our work can broadly be viewed in two parts. In the first part, we concentrate on ad hoc wireless networks [3], [4]. In particular, we consider broadcast in these networks where each node is owned by an independent and selfish user. Being selfish, these nodes do not forward broadcast packets, whereas all existing protocols for broadcast assume that nodes forward transit packets. There is therefore a need to develop new broadcast protocols that overcome node selfishness. In our paper [5], we develop a strategy proof pricing mechanism, which we call the immediate predecessor node pricing mechanism (IPNPM), and an efficient new broadcast protocol based on IPNPM. We show the efficacy of the proposed broadcast protocol using simulation results.

Relevance:

10.00%

Publisher:

Abstract:

In many applications of wireless ad hoc networks, wireless nodes are owned by rational and intelligent users. In this paper, we call nodes selfish if they are owned by independent users whose only objective is to maximize their individual goals. In such situations, it may not be possible to use existing protocols for wireless ad hoc networks, as these protocols assume that nodes follow the prescribed protocol without deviation. Stimulating cooperation among these nodes is an interesting and challenging problem. Providing incentives and pricing the transactions are well-known approaches to stimulating cooperation. In this paper, we present a game theoretic framework for a truthful broadcast protocol and a strategy proof pricing mechanism called the Immediate Predecessor Node Pricing Mechanism (IPNPM). Strategy proof here means that truthful revelation of cost is a weakly dominant strategy (in game theoretic terms) for each node. In order to steer our mechanism design approach towards practical implementation, we compute the payments to nodes using a distributed algorithm. We also propose a new protocol for broadcast in wireless ad hoc networks with selfish nodes based on IPNPM. The features of the proposed broadcast protocol are reliability and a significantly reduced number of packet forwards compared to the number of network nodes, which in turn leads to less system-wide power consumption to broadcast a single packet. Our simulation results show the efficacy of the proposed broadcast protocol.
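The strategy proofness rests on Vickrey-style logic. The following sketch shows only that second-price core idea; IPNPM's actual payment rule pays the immediate predecessor, is computed by a distributed algorithm, and is more involved than this:

```python
def forwarding_payment(costs):
    """Generic second-price payment that makes truthful cost
    declaration weakly dominant. `costs` maps each candidate forwarder
    to its declared forwarding cost (at least two candidates assumed).
    """
    ranked = sorted(costs, key=costs.get)
    winner, runner_up = ranked[0], ranked[1]
    # The chosen forwarder is paid the second-lowest declared cost:
    # under-declaring cannot raise its payment, and over-declaring
    # only risks losing a profitable job, so truth-telling dominates.
    return winner, costs[runner_up]
```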

Relevance:

10.00%

Publisher:

Abstract:

We address the problem of distributed space-time coding with reduced decoding complexity for wireless relay networks. The transmission protocol follows a two-hop model wherein the source transmits a vector in the first hop, and in the second hop the relays transmit a vector that is a transformation of the received vector by a relay-specific unitary transformation. Design criteria are derived for this system model, and codes that achieve full diversity are proposed. For a fixed number of relay nodes, the general system model considered in this paper admits code constructions with lower decoding complexity than codes based on some earlier system models.
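The relay operation in the second hop amounts to multiplying the received vector by the relay's unitary matrix; a minimal sketch, with noise and power normalisation omitted for brevity:

```python
import numpy as np

def relay_transmit(received, unitary):
    """Second hop of the two-hop protocol: each relay sends a
    relay-specific unitary transformation of the vector it received
    in the first hop.
    """
    return unitary @ received

# Example: a 2x2 unitary (rotation) applied by one relay.
theta = np.pi / 4
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
r = np.array([1.0 + 0.5j, -0.3 + 1.0j])   # received vector
x = relay_transmit(r, U)
```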