974 results for Default mode network


Abstract:

This doctoral dissertation introduces an algorithm for constructing the most probable Bayesian network from data for small domains. The algorithm is used to show that a popular goodness criterion for Bayesian networks has a severe sensitivity problem. The dissertation then proposes an information-theoretic criterion that avoids the problem.
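The dissertation's own construction algorithm and criteria are not spelled out in this abstract. As a generic, hypothetical illustration of how a candidate Bayesian network structure can be scored against discrete data with an information-theoretic criterion, here is a minimal Python sketch of the BIC score; the variables, data and structures are invented for the example.

```python
# A minimal sketch (not the dissertation's algorithm): scoring candidate
# Bayesian-network structures on discrete data with the BIC criterion.
# The variables, data and structures below are hypothetical.
import math
from collections import Counter

def bic_score(data, structure):
    """data: list of dicts mapping variable -> value.
    structure: dict mapping variable -> tuple of parent variables."""
    n = len(data)
    score = 0.0
    for var, parents in structure.items():
        states = {row[var] for row in data}
        parent_states = {tuple(row[p] for p in parents) for row in data}
        joint = Counter((tuple(row[p] for p in parents), row[var]) for row in data)
        marginal = Counter(tuple(row[p] for p in parents) for row in data)
        # log-likelihood term: sum over (parent config, value) of N_ijk * log(N_ijk / N_ij)
        loglik = sum(c * math.log(c / marginal[pa]) for (pa, _), c in joint.items())
        # complexity penalty: (r_i - 1) * q_i free parameters for this node
        params = (len(states) - 1) * max(len(parent_states), 1)
        score += loglik - 0.5 * math.log(n) * params
    return score

# Hypothetical usage: two candidate structures over binary variables A and B.
data = [{"A": a, "B": (a if i % 3 else 1 - a)} for i, a in enumerate([0, 1] * 50)]
print(bic_score(data, {"A": (), "B": ("A",)}))   # structure where B depends on A
print(bic_score(data, {"A": (), "B": ()}))       # structure where A and B are independent
```

The penalty term grows with the number of free parameters, which is what makes criteria of this kind less prone to favouring over-connected structures than the raw likelihood.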

Abstract:

This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data, and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are not only interested in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating-dominating codes are more appropriate. This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating-dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs, i.e., geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating-dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
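As a concrete, hypothetical illustration of the kind of problem behind task (iii), the following minimal Python sketch runs the classical greedy approximation for minimum dominating set on a toy graph. It is the standard centralised heuristic, not one of the local or geometric algorithms contributed by the thesis; the graph is invented.

```python
# A minimal sketch of the classical greedy approximation for MINIMUM DOMINATING SET,
# a centralised baseline for the sensor-placement task (iii). The thesis's local
# and geometric algorithms are not reproduced here.
def greedy_dominating_set(adj):
    """adj: dict mapping each node to the set of its neighbours."""
    undominated = set(adj)
    chosen = []
    while undominated:
        # pick the node whose closed neighbourhood covers the most undominated nodes
        best = max(adj, key=lambda v: len((adj[v] | {v}) & undominated))
        chosen.append(best)
        undominated -= adj[best] | {best}
    return chosen

# Hypothetical 6-node graph: a path 0-1-2-3-4-5.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(greedy_dominating_set(adj))  # -> [1, 4], which dominates every node
```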

Abstract:

The following problem is considered. Given the locations of the Central Processing Unit (CPU) and the terminals which have to communicate with it, determine the number and locations of the concentrators and assign the terminals to the concentrators in such a way that the total cost is minimized. There is also a fixed cost associated with each concentrator, and there is an upper limit to the number of terminals which can be connected to a concentrator. The terminals can also be connected directly to the CPU. In this paper it is assumed that the concentrators can be located anywhere in the area A containing the CPU and the terminals. This then becomes a multimodal optimization problem. In the proposed algorithm a stochastic automaton is used as a search device to locate the minimum of the multimodal cost function. The proposed algorithm involves the following. The area A containing the CPU and the terminals is divided into an arbitrary number of regions (say K). An approximate value for the number of concentrators is assumed (say m); the optimum number is determined later by iteration. The m concentrators can be assigned to the K regions in m^K ways (m > K) or K^m ways (K > m). (All possible assignments are feasible, i.e. a region can contain 0, 1, ..., m concentrators.) Each possible assignment is taken to represent a state of the stochastic variable-structure automaton. To start with, all the states are assigned equal probabilities. At each stage of the search the automaton visits a state according to the current probability distribution. At each visit the automaton selects a 'point' inside that state with uniform probability. The cost associated with that point is calculated and the average cost of that state is updated. Then the probabilities of all the states are updated; the probabilities are taken to be inversely proportional to the average costs of the states. After a certain number of searches the search probabilities become stationary and the automaton visits a particular state again and again; the automaton is then said to have converged to that state. The exact locations of the concentrators are then determined by conducting a local gradient search within that state. This algorithm was applied to a set of test problems and the results were compared with those given by Cooper's (1964, 1967) EAC algorithm; on average, the proposed algorithm was found to perform better.
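A minimal Python sketch of the automaton-based search described above is given below, on a toy instance in which the area is split into K vertical strips and the cost is simply the total distance from each terminal to its nearest concentrator. Fixed concentrator costs, capacity limits and the final local gradient search are omitted, and all parameters and terminal positions are invented.

```python
# A minimal sketch of the stochastic-automaton search outlined above, on a toy
# instance: the unit square is split into K vertical strips (the "regions"),
# each state assigns the m concentrators to regions, and state probabilities are
# kept inversely proportional to the running average cost observed in that state.
# Terminal positions, K, m and the simplified cost are hypothetical; the final
# local gradient search inside the winning state is omitted.
import itertools, math, random

random.seed(0)
K, m = 3, 2
terminals = [(random.random(), random.random()) for _ in range(20)]
states = list(itertools.product(range(K), repeat=m))      # region index per concentrator

def sample_point(state):
    # concrete concentrator locations drawn uniformly inside the assigned strips
    return [((r + random.random()) / K, random.random()) for r in state]

def cost(concentrators):
    return sum(min(math.dist(t, c) for c in concentrators) for t in terminals)

avg = {s: cost(sample_point(s)) for s in states}          # one initial sample per state
visits = {s: 1 for s in states}

for _ in range(2000):
    inv = [1.0 / avg[s] for s in states]                  # search probabilities inversely
    s = random.choices(states, weights=inv)[0]            # proportional to average cost
    visits[s] += 1
    avg[s] += (cost(sample_point(s)) - avg[s]) / visits[s]

best = min(states, key=lambda s: avg[s])
print("automaton converged towards state", best, "average cost", round(avg[best], 3))
```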

Abstract:

Network data packet capture and replay capabilities are basic requirements for forensic analysis of faults and security-related anomalies, as well as for testing and development. Cyber-physical networks, in which data packets are used to monitor and control physical devices, must operate within strict timing constraints in order to match the hardware devices' characteristics. Standard network monitoring tools are unsuitable for such systems because they cannot guarantee to capture all data packets, may introduce their own traffic into the network, and cannot reliably reproduce the original timing of data packets. Here we present a high-speed network forensics tool specifically designed for capturing and replaying data traffic in Supervisory Control and Data Acquisition systems. Unlike general-purpose "packet capture" tools, it does not affect the observed network's data traffic and guarantees that the original packet ordering is preserved. Most importantly, it allows replay of network traffic precisely matching its original timing. The tool was implemented by developing a novel user interface and back-end software for a special-purpose network interface card. Experimental results show a clear improvement in data capture and replay capabilities over standard network monitoring methods and general-purpose forensics solutions.
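For contrast with the hardware-assisted tool described above, the following minimal Python sketch shows the best-effort, software-only style of timed replay that general-purpose tools provide: it reproduces inter-packet gaps with sleeps, which, as the abstract notes, cannot give hard timing guarantees. The pcap path and interface name are hypothetical.

```python
# A minimal software-only sketch of replaying a capture while approximating the
# original inter-packet gaps with Scapy. This is the best-effort approach the
# abstract argues is insufficient for strict SCADA timing constraints; the
# paper's tool instead achieves precise timing with a special-purpose NIC.
# The pcap path and interface name below are hypothetical.
import time
from scapy.all import rdpcap, sendp  # pip install scapy

def replay_with_timing(pcap_path, iface):
    packets = rdpcap(pcap_path)
    if not packets:
        return
    start_wall = time.perf_counter()
    start_cap = float(packets[0].time)            # capture timestamp of the first packet
    for pkt in packets:
        # sleep until this packet's offset from the start of the capture is reached
        target = float(pkt.time) - start_cap
        delay = target - (time.perf_counter() - start_wall)
        if delay > 0:
            time.sleep(delay)
        sendp(pkt, iface=iface, verbose=False)

replay_with_timing("scada_capture.pcap", "eth0")  # hypothetical capture and interface
```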

Abstract:

The Distributed Network Protocol v3.0 (DNP3) is one of the most widely used protocols for controlling national infrastructure. The move from point-to-point serial connections to Ethernet-based network architectures allows for large and complex critical infrastructure networks. However, networks and configurations change, so auditing tools are needed to aid in critical infrastructure network discovery. In this paper we present a series of intrusive techniques used for reconnaissance on DNP3 critical infrastructure. Our algorithms discover DNP3 outstation slaves along with their DNP3 addresses, their corresponding master, and class object configurations. To validate our DNP3 reconnaissance algorithms and demonstrate their practicality, we present an implementation of a software tool using a DNP3 plug-in for Scapy. Our implementation validates the utility of our DNP3 reconnaissance technique. The presented techniques will be useful for penetration testing, vulnerability assessments and DNP3 network discovery.
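The paper's tool is built on a DNP3 plug-in for Scapy; since that plug-in's interface is not shown here, the following minimal Python sketch illustrates just one reconnaissance step at the raw protocol level: sweeping candidate outstation addresses with a link-layer Request Link Status frame over the standard DNP3 TCP port 20000. The target host, master address and address range are hypothetical, and this is not the paper's implementation.

```python
# A minimal sketch of one DNP3 reconnaissance step: probing candidate outstation
# addresses with a link-layer "Request Link Status" frame over TCP port 20000.
# The target host, master address and address range below are hypothetical.
import socket
import struct

def dnp3_crc(data: bytes) -> int:
    # DNP3 CRC-16 (polynomial 0x3D65, reflected form 0xA6BC, final complement)
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA6BC if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

def link_status_frame(dst: int, src: int) -> bytes:
    # 0x05 0x64 start bytes, length 5 (control + dest + source),
    # control 0xC9 = primary frame from master, function 9 (Request Link Status)
    header = struct.pack("<BBBBHH", 0x05, 0x64, 0x05, 0xC9, dst, src)
    return header + struct.pack("<H", dnp3_crc(header))

def sweep(host, master_addr=1, candidates=range(0, 100), timeout=0.5):
    found = []
    for dst in candidates:
        try:
            with socket.create_connection((host, 20000), timeout=timeout) as s:
                s.settimeout(timeout)
                s.sendall(link_status_frame(dst, master_addr))
                if s.recv(32):            # any link-layer reply marks a live outstation
                    found.append(dst)
        except OSError:                   # refused connection or timeout: no outstation
            pass
    return found

print(sweep("192.0.2.10"))                # hypothetical outstation host
```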

Abstract:

The nonlinear mode coupling between two co-directional quasi-harmonic Rayleigh surface waves on an isotropic solid is analysed using the method of multiple scales. This procedure yields a system of six semi-linear hyperbolic partial differential equations with the same principal part governing the slow variations in the (complex) amplitudes of the two fundamental, the two second-harmonic and the two combination-frequency waves at the second stage of the perturbation expansion. A numerical solution of these equations for excitation by monochromatic signals at two arbitrary frequencies indicates that there is a continuous transfer of energy back and forth among the fundamental, second-harmonic and combination-frequency waves due to mode coupling. The mode coupling tends to be more pronounced as the frequencies of the interacting waves approach each other.
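The paper's six coupled amplitude equations are not reproduced in this abstract. As a generic illustration of the periodic back-and-forth energy exchange it describes, the following Python sketch integrates a standard textbook two-mode model (a phase-matched fundamental and its second harmonic); the coupling constant, step size and initial amplitudes are arbitrary and not taken from the paper.

```python
# A generic two-mode illustration of mode-coupled energy exchange (not the
# paper's six-equation Rayleigh-wave system): the textbook coupled-amplitude
# equations for a phase-matched fundamental A1 and its second harmonic A2,
#   dA1/dx = i*k*conj(A1)*A2,   dA2/dx = i*(k/2)*A1**2,
# which conserve |A1|^2 + 2*|A2|^2. The coupling constant k, step size and
# initial amplitudes below are hypothetical.
import numpy as np

k = 1.0                                    # hypothetical coupling constant

def rhs(A):
    A1, A2 = A
    return np.array([1j * k * np.conj(A1) * A2, 0.5j * k * A1**2])

A = np.array([1.0 + 0j, 0.1 + 0j])         # fundamental launched with a weak SH seed
dx = 0.01
for step in range(1001):
    if step % 200 == 0:
        print(f"x={step*dx:4.1f}  |A1|^2={abs(A[0])**2:.3f}  |A2|^2={abs(A[1])**2:.3f}")
    # classical fixed-step RK4 in the propagation coordinate x
    k1 = rhs(A); k2 = rhs(A + 0.5*dx*k1); k3 = rhs(A + 0.5*dx*k2); k4 = rhs(A + dx*k3)
    A = A + (dx / 6) * (k1 + 2*k2 + 2*k3 + k4)
```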

Abstract:

The binding of chromomycin A3, an antitumour antibiotic, to various DNAs and chromatins isolated from mouse and rat liver, mouse fibrosarcoma and Yoshida ascites sarcoma cells was studied spectrophotometrically at 29°C in 10^-2 M Tris-HCl buffer, pH 8.0, containing small amounts of MgCl2 (4.5 × 10^-5 to 25 × 10^-5 M). An isosbestic point at 415 nm was observed when chromomycin A3 was gradually titrated with DNA, and its spectrum shifted towards higher wavelength. The rates and extent of these spectral changes were found to be dependent on the concentration of Mg2+. The change in absorbance at 440 nm was used to calculate the apparent binding constant (K_app, M^-1) and the number of sites per nucleotide (n) from Scatchard plots for the various DNAs and chromatins. As expected, values of n for chromatin (0.06-0.10) were found to be lower than those found for the corresponding DNA (0.10-0.15). Apparently no such correlation exists between the binding constants (K_app × 10^-4 M^-1) of DNA (6.4-11.2) and of chromatin (3.1-8.3), but K_app of chromatin isolated from mouse fibrosarcoma and Yoshida ascites sarcoma is 1.5-3 times higher than that found for mouse and rat liver chromatin. These differences may be taken to indicate structural differences in the nucleoprotein complexes caused by neoplasia. The relevance of this finding to the tumour-suppressive action of chromomycin A3 is discussed.
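As an illustration of the Scatchard analysis mentioned above, the following minimal Python sketch fits the linear Scatchard relation r / c_free = K_app * (n - r) to invented binding data; the numbers are hypothetical and are not values from the paper.

```python
# A minimal sketch of Scatchard analysis: fitting r / c_free = K_app * (n - r),
# so a straight line of r/c_free against r has slope -K_app and x-intercept n.
# The binding data below are hypothetical, not taken from the paper.
import numpy as np

# hypothetical bound drug per nucleotide (r) and free drug concentration (M)
r      = np.array([0.02, 0.04, 0.06, 0.08, 0.10])
c_free = np.array([2.5e-6, 6.0e-6, 1.1e-5, 1.9e-5, 3.3e-5])

y = r / c_free                              # Scatchard ordinate
slope, intercept = np.polyfit(r, y, 1)      # least-squares straight line
K_app = -slope                              # apparent binding constant (M^-1)
n = intercept / K_app                       # binding sites per nucleotide

print(f"K_app ~ {K_app:.2e} M^-1, n ~ {n:.2f}")
```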

Abstract:

2,3-Dihydroxybenzoic acid has been shown to be oxidized via the 3-oxoadipate pathway in the leaves of Tecoma stans. The formation of 2-carboxy-cis,cis-muconic acid, a muconolactone, 3-oxoadipic acid and carbon dioxide during its metabolism has been demonstrated using an extract of Tecoma leaves. The first reaction of the pathway, viz. the conversion of 2,3-dihydroxybenzoate to 2-carboxy-cis,cis-muconic acid, has been shown to be catalysed by an enzyme designated 2,3-dihydroxybenzoate 2,3-oxygenase. The enzyme has been partially purified and a few of its properties studied. The enzyme is very labile, with a half-life of 3-4 h. It is maximally active with 2,3-dihydroxybenzoate as the substrate and does not exhibit any activity with catechol, 4-methylcatechol, 3,4-dihydroxybenzoic acid, etc. However, 2,3-dihydroxy-p-toluate and 2,3-dihydroxy-p-cumate are also oxidized by the enzyme, at about 38% and 28% respectively of the rate with 2,3-dihydroxybenzoate. Sulfhydryl reagents inhibit the enzyme reaction, and this inhibition can be prevented by preincubation of the enzyme with the substrate. The substrate also affords protection to the enzyme against thermal inactivation. Sulfhydryl compounds strongly inhibit the reaction, and this inhibition cannot be prevented by preincubation of the enzyme with its substrates. Data on the effect of metal ions as well as metal-chelating agents suggest that copper is the metal cofactor of the enzyme. Evidence is presented which suggests that iron may not be participating in the overall catalytic mechanism.

Abstract:

A purified antitumor protein from the proteinaceous crystal of Bacillus thuringiensis subsp. thuringiensis inhibits the growth of Yoshida ascites sarcoma both in vivo and in vitro. Exogenous respiration of the tumor cells was unaffected by the protein at a concentration as high as 500 µg/ml. The antitumor protein inhibits the uptake and incorporation of labeled precursors into macromolecules; however, the ratio of incorporation to uptake is not affected by the protein. Further, the protein brings about the leakage of 260-nm-absorbing material, proteins, and 32P-labeled cellular constituents from the Yoshida ascites sarcoma cells. The results indicate that the antitumor protein appears to act by altering the cellular permeability of the tumor cells.

Abstract:

Amateurs are found in arts, sports, or entertainment, where they are linked with professional counterparts and inspired by celebrities. Despite the growing number of CSCW studies in amateur and professional domains, little is known about how technologies facilitate collaboration between these groups. Drawing from a 1.5-year field study in the domain of bodybuilding, this paper describes the collaboration between and within amateurs, professionals, and celebrities on social network sites. Social network sites help individuals to improve their performance in competitions, extend their support network, and gain recognition for their achievements. The findings show that amateurs benefit the most from online collaboration, whereas collaboration shifts from social network sites to offline settings as individuals develop further in their professional careers. This shift from online to offline settings constitutes a novel finding, which extends previous work on social network sites that has looked at groups of amateurs and professionals in isolation. As a contribution to practice, we highlight design factors that address this shift to offline settings and foster collaboration between and within groups.

Abstract:

Research on social network sites has examined how people integrate offline and online life, but with a particular emphasis on their use by friendship groups. We extend earlier work by examining a case in which offline ties are non-existent, but online ties strong. Our case is a study of bodybuilders, who explore their passion with like-minded offline 'strangers' in tightly integrated online communities. We show that the integration of offline and online life supports passion-centric activities, such as bodybuilding.

Abstract:

Social network sites (SNSs) such as Facebook have the potential to persuade people to adopt a lifestyle based on exercise and healthy nutrition. We report the findings of a qualitative study of an SNS for bodybuilders, looking at how bodybuilders present themselves online and how they orchestrate the SNS with their offline activities. Discussing the persuasive element of appreciation, we aim to extend previous work on persuasion in web 2.0 technologies.

Abstract:

The higher education sector is under ongoing pressure to demonstrate quality and efficacy of educational provision, including graduate outcomes. Preparing students as far as possible for the world of professional work has become one of the central tasks of contemporary universities. This challenging task continues to receive significant attention by policy makers and scholars, in the broader contexts of widespread labour market uncertainty and massification of the higher education system (Tomlinson, 2012). In contrast to the previous era of the university, in which ongoing professional employment was virtually guaranteed to university-qualified individuals, contemporary graduates must now be proactive and flexible. They must adapt to a job market that may not accept them immediately, and has continually shifting requirements (Clarke, 2008). The saying goes that rather than seeking security in employment, graduates must now “seek security in employability”. However, as I will argue in this chapter, the current curricular and pedagogic approaches universities adopt, and indeed the core structural characteristics of university-based education, militate against the development of the capabilities that graduates require now and into the future.

Abstract:

Ongoing habitat loss and fragmentation threaten much of the biodiversity that we know today. As such, conservation efforts are required if we want to protect biodiversity. Conservation budgets are typically tight, making the cost-effective selection of protected areas difficult. Therefore, reserve design methods have been developed to identify sets of sites that together represent the species of conservation interest in a cost-effective manner. To be able to select reserve networks, data on species distributions are needed. Such data are often incomplete, but species habitat distribution models (SHDMs) can be used to link the occurrence of the species at the surveyed sites to the environmental conditions at these locations (e.g. climatic, vegetation and soil conditions). The probability of the species occurring at unvisited locations is then predicted by the model, based on the environmental conditions of those sites. The spatial configuration of reserve networks is important, because habitat loss around reserves can influence the persistence of species inside the network. Since species differ in their requirements for network configuration, the spatial cohesion of networks needs to be species-specific. A way to account for species-specific requirements is to use spatial variables in SHDMs. Spatial SHDMs allow the evaluation of the effect of reserve network configuration on the probability of occurrence of the species inside the network. Even though reserves are important for conservation, they are not the only option available to conservation planners. To enhance or maintain habitat quality, restoration or maintenance measures are sometimes required. As a result, the number of conservation options per site increases. However, currently available reserve selection tools do not offer the ability to handle multiple, alternative options per site. This thesis extends the existing methodology for reserve design by offering methods to identify cost-effective conservation planning solutions when multiple, alternative conservation options are available per site. Although restoration and maintenance measures are beneficial to certain species, they can be harmful to other species with different requirements. This introduces trade-offs between species when identifying which conservation action is best applied to which site. The thesis describes how the strength of such trade-offs can be identified, which is useful for assessing the consequences of conservation decisions regarding species priorities and budget. Furthermore, the results of the thesis indicate that spatial SHDMs can be successfully used to account for species-specific requirements for spatial cohesion, in the reserve selection (single-option) context as well as in the multi-option context. Accounting for the spatial requirements of multiple species while allowing for several conservation options is however complicated, due to trade-offs in species requirements. It is also shown that spatial SHDMs can be successfully used for gaining information on the factors that drive a species' spatial distribution. Such information is valuable to conservation planning, as better knowledge of species requirements facilitates the design of networks for species persistence. The methods and results described in this thesis aim to improve species' probabilities of persistence by taking better account of species' habitat and spatial requirements.
Many real-world conservation planning problems are characterised by a variety of conservation options related to protection, restoration and maintenance of habitat. Planning tools therefore need to be able to incorporate multiple conservation options per site, in order to continue the search for cost-effective conservation planning solutions. Simultaneously, the spatial requirements of species need to be considered. The methods described in this thesis offer a starting point for combining these two relevant aspects of conservation planning.
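As a minimal illustration of the basic single-option reserve selection problem that the thesis extends, the following Python sketch picks a cheap set of sites that represents every species at least once, using the standard greedy cost-effectiveness heuristic. The site costs and occurrence data are invented, and the thesis's multi-option and spatial-cohesion extensions are not reproduced here.

```python
# A minimal sketch of single-option reserve selection: choose a low-cost set of
# sites so that every species is represented at least once, via the standard
# greedy cost-effectiveness heuristic. Site costs and species occurrences below
# are hypothetical.
def greedy_reserve_selection(costs, occurrences):
    """costs: {site: cost}; occurrences: {site: set of species present}."""
    unrepresented = set().union(*occurrences.values())
    reserve = []
    while unrepresented:
        # pick the site covering the most still-unrepresented species per unit cost
        site = max(occurrences,
                   key=lambda s: len(occurrences[s] & unrepresented) / costs[s])
        if not occurrences[site] & unrepresented:
            break                        # remaining species cannot be covered
        reserve.append(site)
        unrepresented -= occurrences[site]
    return reserve

costs = {"A": 3.0, "B": 1.0, "C": 2.0, "D": 1.5}
occurrences = {"A": {"sp1", "sp2", "sp3"}, "B": {"sp1"},
               "C": {"sp2", "sp4"}, "D": {"sp3", "sp4"}}
print(greedy_reserve_selection(costs, occurrences))   # -> ['D', 'B', 'C']
```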

Abstract:

In this research we modelled computer network devices to ensure their communication behaviours meet various network standards. By modelling devices as finite-state machines and examining their properties in a range of configurations, we discovered a flaw in a common network protocol and produced a technique to improve organisations' network security against data theft.
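The devices, protocol and flaw are not identified in this abstract. As a generic illustration of the approach, the following minimal Python sketch models a toy handshake device as a finite-state machine, explores its reachable states, and flags transitions that violate a simple invariant; all states, events and the invariant are invented and do not reproduce the modelled devices or the discovered flaw.

```python
# A generic sketch of the approach described above: model a device as a
# finite-state machine and exhaustively check a property over its reachable
# states. The toy handshake states, events and invariant are hypothetical.
from collections import deque

# state -> {event: next_state}
transitions = {
    "CLOSED":      {"open": "LISTEN"},
    "LISTEN":      {"syn": "HANDSHAKE", "close": "CLOSED"},
    "HANDSHAKE":   {"ack": "ESTABLISHED", "reset": "CLOSED"},
    "ESTABLISHED": {"data": "ESTABLISHED", "close": "CLOSED",
                    "syn": "ESTABLISHED"},   # suspicious: accepts a fresh syn while open
}

def violates(state, event):
    # invariant to audit: an established session must not accept a new handshake
    return state == "ESTABLISHED" and event == "syn"

def check(start="CLOSED"):
    seen, queue, violations = {start}, deque([start]), []
    while queue:                              # breadth-first reachability search
        state = queue.popleft()
        for event, nxt in transitions[state].items():
            if violates(state, event):
                violations.append((state, event))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return violations

print(check())   # -> [('ESTABLISHED', 'syn')]
```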