896 results for Android, Peer to Peer, Wifi, Mesh Network


Relevance: 100.00%

Abstract:

Chitosan is a natural polymer with antimicrobial activity. Chitosan causes plasma membrane permeabilization and induction of intracellular reactive oxygen species (ROS) in Neurospora crassa. We have determined the transcriptional profile of N. crassa in response to chitosan and identified the main gene targets involved in the cellular response to this compound. Global network analyses showed membrane, transport and oxidoreductase activity as key nodes affected by chitosan. Activation of oxidative metabolism indicates the importance of ROS and cell energy, together with plasma membrane homeostasis, in the N. crassa response to chitosan. Deletion strain analysis of chitosan susceptibility pointed to NCU03639, encoding a class 3 lipase involved in plasma membrane repair by lipid replacement, and NCU04537, an MFS monosaccharide transporter related to assimilation of simple sugars, as the main gene targets of chitosan. NCU10521, a glutathione S-transferase-4 involved in the generation of reducing power for scavenging intracellular ROS, is also a determinant chitosan gene target. Ca2+ increased tolerance to chitosan in N. crassa. Growth of NCU10610 (Fig1 domain) and SYT1 (a synaptotagmin) deletion strains was significantly increased by Ca2+ in the presence of chitosan. Both genes play a determinant role in N. crassa membrane homeostasis. Our results are of paramount importance for developing chitosan as an antifungal.

Relevance: 100.00%

Abstract:

The most straightforward European single energy market design would entail a European system operator regulated by a single European regulator. This would ensure the predictable development of rules for the entire EU, significantly reducing regulatory uncertainty for electricity sector investments. But such a first-best market design is unlikely to be politically realistic in the European context for three reasons. First, the necessary changes compared to the current situation are substantial and would produce significant redistributive effects. Second, a European solution would deprive member states of the ability to manage their energy systems nationally. And third, a single European solution might fall short of being well-tailored to consumers' preferences, which differ substantially across the EU. To nevertheless reap significant benefits from an integrated European electricity market, we propose the following blueprint: First, we suggest adding a European system-management layer to complement national operation centres and help them to better exchange information about the status of the system, expected changes and planned modifications. The ultimate aim should be to transfer the day-to-day responsibility for the safe and economic operation of the system to the European control centre. To further increase efficiency, electricity prices should be allowed to differ between all network points between and within countries. This would enable throughput of electricity through national and international lines to be safely increased without any major investments in infrastructure. Second, to ensure the consistency of national network plans and to ensure that they contribute to providing the infrastructure for a functioning single market, the role of the European ten-year network development plan (TYNDP) needs to be upgraded by obliging national regulators to approve only projects planned at the European level, unless they can prove that deviations are beneficial. This boosted role of the TYNDP would need to be underpinned by resolving the issues of conflicting interests and information asymmetry. Therefore, the network planning process should be opened to all affected stakeholders (generators, network owners and operators, consumers, residents and others) and enable the European Agency for the Cooperation of Energy Regulators (ACER) to act as a welfare-maximising referee. An ultimate political decision by the European Parliament on the entire plan will open a negotiation process around selecting alternatives and agreeing on compensation. This ensures that all stakeholders have an interest in guaranteeing a certain degree of balance of interest in the earlier stages. In fact, transparent planning, early stakeholder involvement and democratic legitimisation are well suited to minimising local opposition to new lines as much as possible. Third, sharing the cost of network investments in Europe is a critical issue. One reason is that so far even the most sophisticated models have been unable to identify the individual long-term net benefit in an uncertain environment. A workable compromise to finance new network investments would consist of three components: (i) all easily attributable cost should be levied on the responsible party; (ii) all network users that sit at nodes that are expected to receive more imports through a line extension should be obliged to pay a share of the line extension cost through their network charges; (iii) the rest of the cost is socialised to all consumers.
Such a cost-distribution scheme will involve some intra-European redistribution from the countries with well-developed infrastructure to those that are catching up. However, such a scheme would perform this redistribution in a much more efficient way than the Connecting Europe Facility's ad-hoc disbursements to politically chosen projects, because it would provide the infrastructure that is really needed.
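A minimal sketch of the three-component split described above, only to make the allocation arithmetic concrete; the figures, the beneficiary share and the function itself are invented for illustration.

```python
# Hypothetical illustration of the proposed three-component cost split:
# (i) attributable cost to the responsible party, (ii) a share of the rest to
# network users at nodes expecting more imports, (iii) the remainder socialised.
def allocate_line_cost(total_cost, attributable, beneficiaries, consumers,
                       beneficiary_share=0.5):
    remaining = total_cost - attributable
    to_beneficiaries = beneficiary_share * remaining
    socialised = remaining - to_beneficiaries
    return {
        "responsible_party": attributable,
        "per_beneficiary_node": to_beneficiaries / len(beneficiaries),
        "per_consumer": socialised / len(consumers),
    }

# Example: a 100 million EUR line, 20 million directly attributable,
# 3 importing nodes, 1,000,000 consumers (all numbers invented).
print(allocate_line_cost(100e6, 20e6, beneficiaries=range(3),
                         consumers=range(1_000_000)))
```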

Relevance: 100.00%

Abstract:

A test of the ability of a probabilistic neural network to classify deposits into types on the basis of deposit tonnage and average Cu, Mo, Ag, Au, Zn, and Pb grades is conducted. The purpose is to examine whether this type of system might serve as a basis for integrating geoscience information available in large mineral databases to classify sites by deposit type. Benefits of proper classification of many sites in large regions are relatively rapid identification of terranes permissive for deposit types and recognition of specific sites perhaps worthy of exploring further. Total tonnages and average grades of 1,137 well-explored deposits identified in published grade and tonnage models representing 13 deposit types were used to train and test the network. Tonnages were transformed by logarithms and grades by square roots to reduce effects of skewness. All values were scaled by subtracting the variable's mean and dividing by its standard deviation. Half of the deposits were selected randomly to be used in training the probabilistic neural network and the other half were used for independent testing. Tests were performed with a probabilistic neural network employing a Gaussian kernel and separate sigma weights for each class (type) and each variable (grade or tonnage). Deposit types were selected to challenge the neural network. For many types, tonnages or average grades are significantly different from other types, but individual deposits may plot in the grade and tonnage space of more than one type. Porphyry Cu, porphyry Cu-Au, and porphyry Cu-Mo types have similar tonnages and relatively small differences in grades. Redbed Cu deposits typically have tonnages that could be confused with porphyry Cu deposits, and also contain Cu and, in some situations, Ag. Cyprus and kuroko massive sulfide types have about the same tonnages and Cu, Zn, Ag, and Au grades. Polymetallic vein, sedimentary exhalative Zn-Pb, and Zn-Pb skarn types contain many of the same metals. Sediment-hosted Au, Comstock Au-Ag, and low-sulfide Au-quartz vein types are principally Au deposits with differing amounts of Ag. Given the intent to test the neural network under the most difficult conditions, an overall 75% agreement between the experts and the neural network is considered excellent. Among the largest classification errors are skarn Zn-Pb and Cyprus massive sulfide deposits classed by the neural network as kuroko massive sulfides (24% and 63% error, respectively). Other large errors are the classification of 92% of porphyry Cu-Mo as porphyry Cu deposits. Most of the larger classification errors involve 25 or fewer training deposits, suggesting that some errors might be the result of small sample size. About 91% of the gold deposit types were classed properly and 98% of porphyry Cu deposits were classed as some type of porphyry Cu deposit. An experienced economic geologist would not make many of the classification errors that were made by the neural network because the geologic settings of deposits would be used to reduce errors. In a separate test, the probabilistic neural network correctly classed 93% of 336 deposits in eight deposit types when trained with presence or absence of 58 minerals and six generalized rock types. The overall success rate of the probabilistic neural network when trained on tonnage and average grades would probably be more than 90% with additional information on the presence of a few rock types.
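A minimal sketch of the described preprocessing and of a Gaussian-kernel probabilistic neural network is given below. It is not the study's implementation: a single shared bandwidth sigma stands in for the separate per-class, per-variable sigma weights, and the synthetic data exist only to show the expected array shapes.

```python
# Sketch of log/sqrt transforms, z-scoring, and a Gaussian-kernel PNN classifier.
import numpy as np

def preprocess(tonnage, grades):
    """Log-transform tonnage, square-root-transform grades, then z-score."""
    X = np.column_stack([np.log10(tonnage), np.sqrt(grades)])
    return (X - X.mean(axis=0)) / X.std(axis=0)

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Assign each test deposit to the class with the largest mean kernel density."""
    preds = []
    for x in X_test:
        scores = {}
        for cls in np.unique(y_train):
            d2 = np.sum((X_train[y_train == cls] - x) ** 2, axis=1)
            scores[cls] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
        preds.append(max(scores, key=scores.get))
    return np.array(preds)

# Synthetic example (2 classes, tonnage + 6 grades), for shape only.
rng = np.random.default_rng(0)
tonnage = rng.lognormal(3, 1, size=40)
grades = rng.random((40, 6))
X = preprocess(tonnage, grades)
y = np.array([0] * 20 + [1] * 20)
print(pnn_predict(X[::2], y[::2], X[1::2]))
```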

Relevance: 100.00%

Abstract:

There is a growing need for innovative methods of dealing with complex social problems. New types of collaborative efforts have emerged as a result of the inability of more traditional bureaucratic hierarchical arrangements, such as departmental programs, to resolve these problems. Network structures are one such arrangement at the forefront of this movement. Although collaboration through network structures establishes an innovative response to dealing with social issues, there remains an expectation that outcomes and processes are based on traditional ways of working. It is necessary for practitioners and policy makers alike to begin to understand the realities of what can be expected from network structures in order to maximize the benefits of these unique mechanisms.

Relevance: 100.00%

Abstract:

Consider a network of unreliable links, modelling, for example, a communication network. Estimating the reliability of the network, expressed as the probability that certain nodes in the network are connected, is a computationally difficult task. In this paper we study how the Cross-Entropy method can be used to obtain more efficient network reliability estimation procedures. Three estimation techniques are considered: Crude Monte Carlo and the more sophisticated Permutation Monte Carlo and Merge Process. We show that the Cross-Entropy method yields a speed-up over all three techniques.
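As a point of reference, here is a minimal sketch of the Crude Monte Carlo baseline for two-terminal reliability; the topology and failure probability are invented. The Cross-Entropy method improves on this kind of estimator by adaptively tilting the link-state sampling distribution and correcting with likelihood ratios, which the sketch does not attempt.

```python
# Crude Monte Carlo estimate of P(node s can reach node t) when each link
# fails independently. Edge list and p_up are hypothetical.
import random
import networkx as nx

def crude_mc_reliability(edges, p_up, s, t, samples=20_000):
    hits = 0
    nodes = {u for e in edges for u in e}
    for _ in range(samples):
        g = nx.Graph()
        g.add_nodes_from(nodes)
        # Keep each link with probability p_up (i.e. it is operational).
        g.add_edges_from(e for e in edges if random.random() < p_up)
        if nx.has_path(g, s, t):
            hits += 1
    return hits / samples

# Example: a small ring network with one chord (assumed topology).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
print(crude_mc_reliability(edges, p_up=0.9, s=0, t=2))
```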

Relevance: 100.00%

Abstract:

Ad hoc wireless sensor networks (WSNs) are formed from self-organising configurations of distributed, energy-constrained, autonomous sensor nodes. The service lifetime of such sensor nodes depends on the power supply and the energy consumption, which is typically dominated by the communication subsystem. One of the key challenges in unlocking the potential of such data-gathering sensor networks is conserving energy so as to maximise their post-deployment active lifetime. This thesis describes research carried out on the continued development of the novel energy-efficient Optimised grids algorithm, which increases WSN lifetime and improves QoS parameters, yielding higher throughput and lower latency and jitter for the next generation of WSNs. Based on the relationship between range and traffic, the novel Optimised grids algorithm provides a robust, traffic-dependent, energy-efficient grid size that minimises the cluster-head energy consumption in each grid and balances energy use throughout the network. Efficient spatial reusability allows the novel Optimised grids algorithm to improve network QoS parameters. The most important advantage of this model is that it can be applied to all one- and two-dimensional traffic scenarios where the traffic load may fluctuate due to sensor activities. During traffic fluctuations the novel Optimised grids algorithm can be used to re-optimise the wireless sensor network to bring further benefits in energy reduction and improvement in QoS parameters. As idle energy becomes dominant at lower traffic loads, the new Sleep Optimised grids model incorporates the sleep and idle energy duty cycles, which can be implemented to achieve further network lifetime gains in all wireless sensor network models. Another key advantage of the novel Optimised grids algorithm is that it can be implemented alongside existing energy-saving protocols such as GAF, LEACH, SMAC and TMAC to further enhance network lifetimes and improve QoS parameters. The novel Optimised grids algorithm does not interfere with these protocols, but creates an overlay to optimise the grid sizes, and hence the transmission range, of wireless sensor nodes.
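The abstract does not give the algorithm's details, but the range-versus-energy trade-off it exploits can be illustrated with the standard first-order radio model. The sketch below is not the Optimised grids algorithm: the constants are typical literature values assumed here, and it omits the traffic dependence and cluster-head load balancing that the thesis adds.

```python
# Illustrative sketch only -- why an optimal grid size (hence transmission range)
# exists: small grids mean many energy-hungry receive/forward hops, large grids
# mean the transmit-amplifier energy grows with the square of the hop distance.
import math

E_ELEC = 50e-9     # J/bit spent by the radio electronics per transmit or receive
EPS_AMP = 100e-12  # J/bit/m^2 spent by the transmit amplifier

def energy_per_bit(total_distance_m, grid_size_m):
    """Energy to relay one bit to the sink using hops of one grid size each."""
    hops = total_distance_m / grid_size_m
    per_hop = 2 * E_ELEC + EPS_AMP * grid_size_m ** 2   # one receive + one transmit
    return hops * per_hop

# Sweep candidate grid sizes and compare with the analytical optimum.
candidates = range(5, 105, 5)
best = min(candidates, key=lambda g: energy_per_bit(500, g))
print(best, math.sqrt(2 * E_ELEC / EPS_AMP))   # ~30 m and ~31.6 m
```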

Relevance: 100.00%

Abstract:

The fast spread of the Internet and the increasing demands on its services are leading to radical changes in the structure and management of the underlying telecommunications systems. Active networks (ANs) offer the ability to program the network on a per-router, per-user, or even per-packet basis, and thus promise greater flexibility than current networks. For this new paradigm of active networking to be widely accepted, many issues need to be solved, and management of the active network is one of these challenges. This thesis investigates an adaptive management solution based on a genetic algorithm (GA). The solution uses a distributed, bacterium-inspired GA running on the active nodes within an active network to provide adaptive management for the network, especially for the service-provision problems associated with future networks. The thesis also reviews the concepts, theories and technologies associated with the management solution. By exploring the implementation of these active nodes in hardware, this thesis demonstrates the possibility of implementing GA-based adaptive management in the real networks in use today. The concurrent programming language Handel-C is used for the description of the design, and a reconfigurable computing platform based on an FPGA processing element is used for the hardware implementation. The experimental results demonstrate both the feasibility of the hardware implementation and the efficiency of the proposed management solution.
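The thesis's GA is distributed across active nodes, bacterium-inspired, and written in Handel-C for an FPGA; as a rough illustration of the evolutionary loop such a manager builds on, here is a minimal, centralised GA sketch with a placeholder fitness function. The bit-string encoding and all parameters are assumptions, not the thesis's design.

```python
# Generic GA loop: truncation selection, one-point crossover, bit-flip mutation.
import random

def fitness(genome):
    # Placeholder objective: in practice this would score a candidate
    # service-provision configuration observed at the node.
    return sum(genome)

def evolve(pop_size=20, genome_len=16, generations=50, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)          # one-point crossover
            child = [g ^ 1 if random.random() < p_mut else g
                     for g in a[:cut] + b[cut:]]           # bit-flip mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

print(evolve())
```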

Relevance: 100.00%

Abstract:

The Roma population has become a highly debated policy issue in the European Union (EU). The EU acknowledges that this ethnic minority faces extreme poverty and complex social and economic problems: 52% of the Roma population live in extreme poverty and 75% in poverty (Soros Foundation, 2007, p. 8), with a life expectancy at birth about ten years lower than that of the majority population. As a result, Romania has received a great deal of policy attention and EU funding, being eligible for 19.7 billion euros from the EU for 2007-2013. Yet progress is slow, and it is debated whether Romania's government and companies were capable of using these funds (EurActiv.ro, 2012). Analysing three case studies, this research looks at policy implementation in relation to the role of Roma networks in different geographical regions of Romania. It gives insights into how to get things done in complex settings and explains responses to the Roma problem as a 'wicked' policy issue. This longitudinal research was conducted between 2008 and 2011, comprising 86 semi-structured interviews, 15 observations and documentary sources, and using a purposive sample focused on the institutions responsible for implementing social policies for Roma: Public Health Departments, School Inspectorates, City Halls, Prefectures, and NGOs. Respondents included governmental workers, academics, Roma school mediators, Roma health mediators, Roma experts, Roma Councillors, NGO workers, and Roma service users. By triangulating the data collected with various methods and across various categories of respondents, a comprehensive and precise representation of Roma network practices was created. The provisions of the 2001 'Governmental Strategy to Improve the Situation of the Roma Population' facilitated the formation of a Roma network by introducing special jobs in local and central administration. In different counties, resources, people, their skills, and practices varied. In contrast to the communist period, a new Roma elite emerged: social entrepreneurs set the pace of change by creating either closed cliques or open alliances and by using more or less transparent practices. This research deploys the concept of social/institutional entrepreneurs to analyse how key actors influence clique and alliance formation and functioning. Significantly, by contrasting three case studies, it shows that both closed cliques and open alliances help to achieve public policy network objectives, but that closed cliques can also lead to failure to improve the health and education of Roma people in a certain region.

Relevance: 100.00%

Abstract:

We report for the first time on the limitations in the operational power range of network traffic in the presence of heterogeneous 28-Gbaud polarization-multiplexed quadrature amplitude modulation (PM-mQAM) channels in a nine-channel dynamic optical mesh network. In particular, we demonstrate that transponders which autonomously select a modulation order and launch power to optimize their own performance will have a severe impact on copropagating network traffic. Our results also suggest that altruistic transponder operation may offer even lower penalties than fixed launch power operation.

Relevance: 100.00%

Abstract:

This paper builds on Granovetter's distinction between strong and weak ties [Granovetter, M. S. 1973. The strength of weak ties. Amer. J. Sociol. 78(6) 1360–1380] in order to respond to recent calls for a more dynamic and processual understanding of networks. The concepts of potential and latent tie are deductively identified, and their implications for understanding how and why networks emerge, evolve, and change are explored. A longitudinal empirical study conducted with companies operating in the European motorsport industry reveals that firms take strategic actions to search for potential ties and reactivate latent ties in order to solve problems of network redundancy and overload. Examples are given, and their characteristics are examined to provide theoretical elaboration of the relationship between the types of tie and network evolution. These conceptual and empirical insights move understanding of the managerial challenge of building effective networks beyond static structural contingency models of optimal network forms to highlight the processes and capabilities of dynamic relationship building and network development. In so doing, this paper highlights the interrelationship between search and redundancy and the scope for strategic action alongside path dependence and structural influences on network processes.

Relevance: 100.00%

Abstract:

This paper analyzes the theme of knowledge transfer in supply chain management. The aim of this study is to present social network analysis (SNA) as a useful tool to study knowledge networks within the supply chain, to monitor knowledge flows and to identify the knowledge-accumulating nodes of the network.
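A minimal sketch of the kind of SNA measures involved is given below; the firms and knowledge-transfer ties are hypothetical, and in-degree and betweenness centrality are used only as plausible proxies for knowledge accumulation and brokerage, not as the paper's specific method.

```python
# Identify knowledge-accumulating and brokering nodes in a toy supply-chain
# knowledge network. In practice the edge list would come from survey or
# transaction data.
import networkx as nx

# Directed edge u -> v means "u transfers knowledge to v".
ties = [("SupplierA", "Manufacturer"), ("SupplierB", "Manufacturer"),
        ("Manufacturer", "Distributor"), ("Distributor", "RetailerA"),
        ("Distributor", "RetailerB"), ("SupplierA", "Distributor")]
g = nx.DiGraph(ties)

# In-degree approximates where knowledge accumulates; betweenness approximates
# who brokers flows between otherwise disconnected parts of the chain.
accumulators = sorted(g.in_degree(), key=lambda kv: kv[1], reverse=True)
brokers = nx.betweenness_centrality(g)
print(accumulators[:3])
print(max(brokers, key=brokers.get))
```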

Relevance: 100.00%

Abstract:

The importance of interorganizational networks in supporting or hindering the achievement of organizational objectives is now widely acknowledged. Network research is directed at understanding network processes and structures, and their impact upon performance. A key process is learning. The concepts of individual, group and organizational learning are long established. This article argues that learning might also usefully be regarded as occurring at a fourth system level, the interorganizational network. The concept of network learning - learning by a group of organizations as a group - is presented, and differentiated from other types of learning, notably interorganizational learning (learning in interorganizational contexts). Four cases of network learning are identified and analysed to provide insights into network learning processes and outcomes. It is proposed that 'network learning episode' offers a suitable unit of analysis for the empirical research needed to develop our understanding of this potentially important concept.

Relevance: 100.00%

Abstract:

Returnable transport equipment (RTE) such as pallets forms an integral part of the supply chain, and poor management leads to costly losses. Companies often address this matter by outsourcing the management of RTE to logistics service providers (LSPs). LSPs are faced with the task of providing logistical expertise to reduce RTE-related waste, whilst differentiating their own services to remain competitive. In the current challenging economic climate, the role of the LSP in delivering innovative ways to achieve competitive advantage has never been so important. It is reported that applying radio frequency identification (RFID) to RTE enables LSPs such as DHL to gain competitive advantage and offer clients improvements such as loss reduction, process efficiency improvement and effective security. However, the increased visibility and functionality of RFID-enabled RTE requires further investigation with regard to decision-making. The distributed nature of the RTE network favours a decentralised decision-making format. Agents are an effective way to represent objects from the bottom up, capturing their behaviour and enabling localised decision-making. Therefore, an agent-based system is proposed to represent the RTE network and utilise the visibility and data gathered from RFID tags. Two types of agents are developed to represent the trucks and the RTE, with bespoke rules and algorithms to facilitate negotiations. The aim is to create schedules that integrate RTE pick-ups as the trucks go back to the depot. The findings assert that:
- agent-based modelling provides an autonomous tool that is effective in modelling RFID-enabled RTE in a decentralised manner, utilising the real-time data facility;
- the RFID-enabled RTE model developed enables autonomous agent interaction, which leads to a feasible schedule integrating both forward and reverse flows for each RTE batch;
- the RTE agent scheduling algorithm developed promotes the utilisation of RTE by including an automatic return flow for each batch of RTE, whilst considering the fleet costs and utilisation rates;
- the research conducted contributes an agent-based platform, which LSPs can use to assess the most appropriate strategies to implement for RTE network improvement for each of their clients.
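A minimal agent-based sketch of the idea, not the thesis's system: RTE batches announce pick-up requests and returning trucks bid on them by detour cost, so reverse flows are folded into the backhaul to the depot. The locations, capacities and the greedy assignment rule are all assumptions for illustration.

```python
# Toy truck/RTE agents with a bid-based pick-up assignment.
from dataclasses import dataclass

@dataclass
class RTEBatch:
    batch_id: str
    location: float        # position along the return route (0..1)

@dataclass(eq=False)
class Truck:
    truck_id: str
    route_position: float  # current position on the way back to the depot
    capacity: int

    def bid(self, batch):
        """Lower bid = smaller detour; only batches still ahead are reachable."""
        if batch.location < self.route_position or self.capacity == 0:
            return float("inf")
        return batch.location - self.route_position

def assign_pickups(trucks, batches):
    schedule = []
    for batch in sorted(batches, key=lambda b: b.location):
        winner = min(trucks, key=lambda t: t.bid(batch))    # lowest detour wins
        if winner.bid(batch) != float("inf"):
            winner.capacity -= 1
            schedule.append((winner.truck_id, batch.batch_id))
    return schedule

trucks = [Truck("T1", 0.1, 2), Truck("T2", 0.5, 1)]
batches = [RTEBatch("P1", 0.3), RTEBatch("P2", 0.6), RTEBatch("P3", 0.8)]
print(assign_pickups(trucks, batches))
```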

Relevance: 100.00%

Abstract:

The ALBA 2002 Call for Papers asks the question 'How do organizational learning and knowledge management contribute to organizational innovation and change?'. Intuitively, we would argue, the answer should be relatively straightforward, as links between learning and change, and between knowledge management and innovation, have long been commonly assumed to exist. On the basis of this assumption, theories of learning tend to focus 'within organizations' and assume a transfer of learning from individual to organization which in turn leads to change. Empirically, however, we find these links are more difficult to articulate. Organizations exist in complex embedded economic, political, social and institutional systems, hence organizational change (or innovation) may be influenced by learning in this wider context. Based on our research in this wider interorganizational setting, we first make the case for the notion of network learning, which we then explore to develop our appreciation of change in interorganizational networks, and how it may be facilitated. The paper begins with a brief review of literature on learning in the organizational and interorganizational context, which locates our stance on organizational learning versus the learning organization, and on social, distributed versus technical, centred views of organizational learning and knowledge. Developing from the view that organizational learning is "a normal, if problematic, process in every organization" (Easterby-Smith, 1997: 1109), we introduce the notion of network learning: learning by a group of organizations as a group. We argue this is also a normal, if problematic, process in organizational relationships (as distinct from interorganizational learning), which has particular implications for network change. Part two of the paper develops our analysis, drawing on empirical data from two studies of learning. The first study addresses the issue of learning to collaborate between industrial customers and suppliers, leading to the case for network learning. The second, larger-scale study develops this theme, examining learning around several major change issues in a healthcare service provider network. The learning processes and outcomes around the introduction of a particularly controversial and expensive technology are described, providing a rich and contrasting case with the first study. In part three, we discuss the implications of this work for change, and for facilitating change. Conclusions from the first study identify potential interventions designed to facilitate individual and organizational learning within the customer organization to develop individual and organizational 'capacity to collaborate'. Translated to the network example, we observe that network change entails learning at all levels: network, organization, group and individual. However, presenting findings in terms of interventions is less meaningful in an interorganizational network setting, given the differences in authority structures, the less formalised nature of the network setting, and the importance of evaluating performance at the network rather than the organizational level. Academics challenge both the idea of managing change and of managing networks. Nevertheless, practitioners are faced with the issue of understanding and influencing change in the network setting.
Thus we conclude that a network learning perspective is an important development in our understanding of organizational learning, capability and change, locating this in the wider context in which organizations are embedded. This in turn helps to develop our appreciation of facilitating change in interorganizational networks, both in terms of change issues (such as introducing a new technology) and in terms of change orientation and capability.

Relevance: 100.00%

Abstract:

Epilepsy is one of the most common neurological disorders, a large fraction of which is resistant to pharmacotherapy. In this light, understanding the mechanisms of epilepsy, and of its intractable forms in particular, could create new targets for pharmacotherapeutic intervention. The current project explores the dynamic changes in neuronal network function in chronic temporal lobe epilepsy (TLE) in rat and human brain in vitro. I focused on the process of establishment of epilepsy (epileptogenesis) in the temporal lobe. Rhythmic behaviour of the hippocampal neuronal networks in healthy animals was explored using spontaneous oscillations in the gamma frequency band (SγO). The use of an improved brain slice preparation technique resulted in the natural occurrence (in the absence of pharmacological stimulation) of rhythmic activity, which was then pharmacologically characterised and compared to other models of gamma oscillations (KA- and CCh-induced oscillations) using the local field potential recording technique. The results showed that SγO differed from the pharmacologically driven models, suggesting higher physiological relevance of SγO. Network activity was also explored in the medial entorhinal cortex (mEC), where spontaneous slow wave oscillations (SWO) were detected. To investigate the course of chronic TLE establishment, a refined Li-pilocarpine-based model of epilepsy (RISE) was developed. The model significantly reduced animal mortality and demonstrated reduced intensity yet high morbidity, with an almost 70% mean success rate of developing spontaneous recurrent seizures. We used SγO to characterise changes in the hippocampal neuronal networks throughout epileptogenesis. The results showed that the network remained largely intact, demonstrating the subtle nature of the RISE model. Despite this, a reduction in network activity was detected during the so-called latent (no seizure) period, which was hypothesised to occur due to network fragmentation and abnormal function of kainate receptors (KAr). We therefore explored the function of KAr by challenging SγO with kainic acid (KA). The results demonstrated a remarkable decrease in the KAr response during the latent period, suggesting KAr dysfunction or altered expression, which will be further investigated using a variety of electrophysiological and immunocytochemical methods. The entorhinal cortex, together with the hippocampus, is known to play an important role in TLE. Considering this, we investigated neuronal network function of the mEC during epileptogenesis using SWO. The results demonstrated a striking difference in AMPAr function, with possible receptor upregulation or abnormal composition in the early development of epilepsy. Alterations in receptor function inevitably lead to changes in network function, which may play an important role in the development of epilepsy. Preliminary investigations were made using slices of human brain tissue taken following surgery for intractable epilepsy. Initial results showed that oscillogenesis could be induced in human brain slices and that such network activity was pharmacologically similar to that observed in rodent brain. Overall, our findings suggest that excitatory glutamatergic transmission is heavily involved in the process of epileptogenesis. Together with other types of receptors, KAr and AMPAr contribute to epilepsy establishment and may be the key to uncovering its mechanism.