588 results for random network coding
Abstract:
Wind speed measurement systems are sparse in the tropical regions of Australia. As a result, tropical cyclone wind speeds impacting communities are seldom measured and are often only ‘guestimated’ by analysing the extent of damage to structures. In an attempt to overcome this dearth of data, a re-locatable network of anemometers, to be deployed prior to tropical cyclone landfall, is currently being developed. This paper discusses the design criteria of the network's tripods and tie-down system, the proposed deployment of the anemometers, the instrumentation and the data logging. Preliminary assessment of the anemometer response indicates a reliable system for measuring spectral components of the wind at frequencies up to approximately 1 Hz. This system limitation highlights an important difference between the capabilities of modern instrumentation and those of the Dines anemometer (response time around 0.2 seconds) that was used to develop much of the design criteria within the Australian building code and wind loading standard.
Abstract:
Inhibition of cholesterol export from late endosomes causes cellular cholesterol imbalance, including cholesterol depletion in the trans-Golgi network (TGN). Here, using Chinese hamster ovary (CHO) Niemann-Pick type C1 (NPC1) mutant cell lines and human NPC1 mutant fibroblasts, we show that altered cholesterol levels at the TGN/endosome boundaries trigger Syntaxin 6 (Stx6) accumulation into VAMP3, transferrin, and Rab11-positive recycling endosomes (REs). This increases Stx6/VAMP3 interaction and interferes with the recycling of αVβ3 and α5β1 integrins and cell migration, possibly in a Stx6-dependent manner. In NPC1 mutant cells, restoration of cholesterol levels in the TGN, but not inhibition of VAMP3, restores the steady-state localization of Stx6 in the TGN. Furthermore, elevation of RE cholesterol is associated with increased amounts of Stx6 in RE. Hence, the fine-tuning of cholesterol levels at the TGN-RE boundaries together with a subset of cholesterol-sensitive SNARE proteins may play a regulatory role in cell migration and invasion.
Abstract:
Cancer can be defined as a deregulation or hyperactivity in the ongoing network of intracellular and extracellular signaling events. Reverse-phase protein microarray technology may offer a new opportunity to measure and profile these signaling pathways, providing data on post-translational phosphorylation events not obtainable by gene microarray analysis. Treatment of ovarian epithelial carcinoma almost always takes place in a metastatic setting since, unfortunately, the disease is often not detected until later stages. Thus, in addition to elucidating the molecular network within a tumor specimen, critical questions are to what extent signaling changes occur upon metastasis and whether common pathway elements arise in the metastatic microenvironment. For individualized combinatorial therapy, ideal therapeutic selection based on proteomic mapping of phosphorylation end points may require evaluation of the patient's metastatic tissue. Extending these findings to the bedside will require the development of optimized protocols and reference standards. We have developed a reference standard based on a mixture of phosphorylated peptides to begin to address this challenge.
Abstract:
Brain decoding of functional Magnetic Resonance Imaging data is a pattern analysis task that links brain activity patterns to experimental conditions. Classifiers predict neural states from the spatial and temporal patterns of brain activity extracted from multiple voxels in the functional images over a certain period of time. The prediction results offer insight into the nature of neural representations and cognitive mechanisms, and the classification accuracy determines our confidence in understanding the relationship between brain activity and stimuli. In this paper, we compared the efficacy of three machine learning algorithms (neural networks, support vector machines, and conditional random fields) in decoding visual stimuli or neural cognitive states from functional Magnetic Resonance Imaging data. Leave-one-out cross-validation was performed to quantify the generalization accuracy of each algorithm on unseen data. The results indicated that support vector machines and conditional random fields have comparable performance, and that the potential of the latter is worthy of further investigation.
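As a hypothetical illustration of the leave-one-out protocol described above, a minimal sketch follows. The nearest-centroid classifier is a simple stand-in for the paper's decoders (SVM, neural network, conditional random field), and the toy "voxel" patterns are invented for the example.

```python
# Minimal leave-one-out cross-validation sketch. The nearest-centroid
# classifier stands in for the decoders compared in the paper.

def nearest_centroid_fit(X, y):
    """Return per-class mean activity patterns (centroids)."""
    centroids = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def nearest_centroid_predict(centroids, x):
    """Predict the class whose centroid is closest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda lab: dist(centroids[lab]))

def loo_accuracy(X, y):
    """Leave-one-out CV: train on all samples but one, test on that one."""
    correct = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        model = nearest_centroid_fit(train_X, train_y)
        correct += nearest_centroid_predict(model, X[i]) == y[i]
    return correct / len(X)

# Toy two-voxel patterns for two hypothetical stimulus conditions.
X = [[1.0, 0.1], [0.9, 0.2], [1.1, 0.0], [0.1, 1.0], [0.2, 0.9], [0.0, 1.1]]
y = ["face", "face", "face", "house", "house", "house"]
print(loo_accuracy(X, y))
```

Because every held-out sample is predicted by a model that never saw it, the resulting accuracy is an estimate of generalization to unseen data, which is the quantity the paper uses to compare algorithms.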
Abstract:
This paper examines a buffer scheme to mitigate the negative impacts of power-conditioned loads on network voltage and transient stabilities. The scheme is based on the use of battery energy-storage systems in the buffers. The storage systems ensure that protected loads downstream of the buffers can ride through upstream voltage sags and swells. Also, by controlling the buffers to operate in either constant impedance or constant power modes, power is absorbed or injected by the storage systems. The scheme thereby regulates the rotor-angle deviations of generators and enhances network transient stability. A computational method is described in which the capacity of the storage systems is determined to achieve simultaneously the above dual objectives of load ride-through and stability enhancement. The efficacy of the resulting scheme is demonstrated through numerical examples.
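A back-of-envelope sketch of why the two buffer operating modes differ during a voltage sag: under constant impedance the upstream demand falls with the square of voltage, so the storage injects the shortfall to the protected load, while under constant power the upstream demand is held fixed. All numbers are illustrative and not taken from the paper.

```python
# Toy comparison of the buffer's two operating modes during a voltage sag.
# Per-unit voltage and kW figures are illustrative only.

V_NOMINAL = 1.0   # per-unit nominal voltage
P_LOAD = 100.0    # kW drawn by the protected load at nominal voltage
Z = V_NOMINAL ** 2 / P_LOAD  # equivalent impedance at nominal conditions

def upstream_power(v, mode):
    """Power drawn from the (sagged) upstream network by the buffer."""
    if mode == "constant_impedance":
        return v ** 2 / Z          # demand falls with the square of voltage
    if mode == "constant_power":
        return P_LOAD              # demand held at the load's rating
    raise ValueError(mode)

def storage_power(v, mode):
    """Power the battery storage must inject so the load rides through."""
    return P_LOAD - upstream_power(v, mode)

v_sag = 0.8  # a 20 % upstream voltage sag
for mode in ("constant_impedance", "constant_power"):
    print(mode, round(storage_power(v_sag, mode), 1), "kW injected")
```

The mode choice thus determines how much power the storage exchanges for a given disturbance, which is why the paper sizes the storage capacity against both the ride-through and the stability objectives.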
Abstract:
Multi-party key agreement protocols implicitly assume that each principal contributes equally to the final form of the key. In this paper we consider three malleability attacks on multi-party key agreement protocols. The first attack, called strong key control, allows a dishonest principal (or a group of principals) to fix the key to a pre-set value. The second attack is weak key control, in which the key is still random, but the set from which the key is drawn is much smaller than expected. The third attack is selective key control, in which a dishonest principal (or a group of dishonest principals) is able to remove the contribution of honest principals to the group key. The paper discusses the above three attacks on several key agreement protocols, including DH (Diffie-Hellman), BD (Burmester-Desmedt) and JV (Just-Vaudenay). We show that dishonest principals in all three protocols can weakly control the key, and that the only protocol which does not allow strong key control is the DH protocol. The BD and JV protocols allow any pair of neighbouring principals to modify the group key, and this modification remains undetected by the honest principals.
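To make "strong key control" concrete, consider a toy contributory scheme (not DH, BD or JV, just an invented illustration) in which the group key is the XOR of all contributions: a dishonest principal who contributes last, after seeing the honest shares, can force the key to any pre-set value.

```python
# Toy illustration of the strong key control attack. In a naive
# XOR-combining group key agreement, the last principal to contribute can
# cancel the honest randomness and fix the final key. This toy protocol
# only illustrates the attack notion; it is none of the protocols analysed
# in the paper.
import secrets

def xor_all(values):
    key = 0
    for v in values:
        key ^= v
    return key

# Honest principals pick random 128-bit contributions.
honest = [secrets.randbits(128) for _ in range(4)]

# The dishonest principal wants the group key to equal this value.
target = 0xDEADBEEF

# It chooses its contribution as target XOR (XOR of honest shares),
# cancelling the honest contributions entirely.
malicious = target ^ xor_all(honest)

group_key = xor_all(honest + [malicious])
print(hex(group_key))
```

The printed key equals the attacker's pre-set value no matter what the honest principals chose, which is exactly the property that a sound contributory protocol must rule out.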
Abstract:
The 3′ UTRs of eukaryotic genes participate in a variety of post-transcriptional (and some transcriptional) regulatory interactions. Some of these interactions are well characterised, but an undetermined number remain to be discovered. While some regulatory sequences in 3′ UTRs may be conserved over long evolutionary time scales, others may have only ephemeral functional significance as regulatory profiles respond to changing selective pressures. Here we propose a sensitive segmentation methodology for investigating patterns of composition and conservation in 3′ UTRs based on comparison of closely related species. We describe encodings of pairwise and three-way alignments integrating information about conservation, GC content and transition/transversion ratios and apply the method to three closely related Drosophila species: D. melanogaster, D. simulans and D. yakuba. Incorporating multiple data types greatly increased the number of segment classes identified compared to similar methods based on conservation or GC content alone. We propose that the number of segments and number of types of segment identified by the method can be used as proxies for functional complexity. Our main finding is that the number of segments and segment classes identified in 3′ UTRs is greater than in the same length of protein-coding sequence, suggesting greater functional complexity in 3′ UTRs. There is thus a need for sustained and extensive efforts by bioinformaticians to delineate functional elements in this important genomic fraction. C code, data and results are available upon request.
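A minimal sketch, much simplified relative to the paper's segmentation methodology, of the kind of per-column encoding of a pairwise alignment that combines conservation, GC content and transition/transversion status. The sequences and category names are invented for the example.

```python
# Sketch: encode a pairwise alignment column-by-column with conservation,
# GC membership and transition/transversion status. Simplified relative to
# the paper's segmentation methodology; labels are illustrative.

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def column_code(a, b):
    """Encode one aligned column of two sequences."""
    if a == "-" or b == "-":
        return "gap"
    if a == b:
        return "match_GC" if a in "GC" else "match_AT"
    same_class = ({a, b} <= PURINES) or ({a, b} <= PYRIMIDINES)
    return "transition" if same_class else "transversion"

def encode_alignment(seq1, seq2):
    return [column_code(a, b) for a, b in zip(seq1, seq2)]

# Toy aligned 3' UTR fragments from two hypothetical species.
s1 = "ATGC-CGTA"
s2 = "ATACTCGCA"
print(encode_alignment(s1, s2))
```

A segmentation model run over such an encoded sequence can then group runs of similar columns into segments, and counting the distinct segment classes gives the kind of complexity proxy the paper proposes.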
Abstract:
Toxicity is a major concern for anti-neoplastic drugs, with much of the existing pharmacopoeia being characterized by a very narrow therapeutic index. 'Network-targeted' combination therapy is a promising new concept in cancer therapy, whereby therapeutic index might be improved by targeting multiple nodes in a cell's signaling network, rather than a single node. Here, we examine the potential of this novel approach, illustrating how therapeutic benefit could be achieved with smaller doses of the necessary agents.
Abstract:
In Crypto’95, Micali and Sidney proposed a method for shared generation of a pseudo-random function f(·) among n players in such a way that for all inputs x, any u players can compute f(x) while t or fewer players fail to do so, where 0 ≤ t < u ≤ n. The idea behind the Micali–Sidney scheme is to generate and distribute secret seeds S = {s1, …, sd} of a poly-random collection of functions among the n players, each player getting a subset of S, in such a way that any u players together hold all the secret seeds in S while any t or fewer players lack at least one element of S. The pseudo-random function is then computed as f(x) = f_{s1}(x) ⊕ ⋯ ⊕ f_{sd}(x), where the f_{si}(·)'s are poly-random functions. One question raised by Micali and Sidney is how to distribute the secret seeds satisfying the above condition such that the number of seeds, d, is as small as possible. In this paper, we continue the work of Micali and Sidney. We first provide a general framework for shared generation of pseudo-random functions using cumulative maps. We demonstrate that the Micali–Sidney scheme is a special case of this general construction. We then derive an upper and a lower bound for d. Finally, we give a simple yet efficient greedy approximation algorithm for generating the secret seeds S in which d is within a factor of at most u ln 2 of the optimum.
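The combinatorial baseline behind such seed distributions can be checked directly. This sketch uses the straightforward one-seed-per-t-subset construction, which gives d = C(n, t) seeds, rather than the paper's greedy approximation of an optimal cumulative map.

```python
# Baseline seed distribution for (t, u)-threshold shared PRF evaluation:
# create one seed per t-subset T of players and give it to every player
# NOT in T. Any t players jointly miss the seed indexed by a superset of
# themselves, while any coalition of t+1 or more players holds all seeds.
# This is the d = C(n, t) baseline, not the paper's greedy algorithm.
from itertools import combinations

def distribute_seeds(n, t):
    """Map each seed (indexed by a t-subset of players) to its holders."""
    players = range(n)
    return {frozenset(T): set(players) - set(T)
            for T in combinations(players, t)}

def coalition_holds_all(seeds, coalition):
    """True iff the coalition collectively holds every seed."""
    return all(holders & coalition for holders in seeds.values())

n, t = 5, 2
seeds = distribute_seeds(n, t)
print("number of seeds d =", len(seeds))  # C(5, 2) = 10

# Any coalition of size <= t misses at least one seed ...
print(coalition_holds_all(seeds, {0, 1}))
# ... while any coalition of size t+1 holds all of them.
print(coalition_holds_all(seeds, {0, 1, 2}))
```

The point of the paper's bounds and greedy algorithm is precisely that this baseline d = C(n, t) is often far from optimal, and a smaller seed set satisfying the same access structure can be found.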
Abstract:
Social Network (SN) users have various privacy requirements to protect their information. To address this issue, a six-stage thematic analysis of scholarly articles related to SN user privacy concerns was synthesized. This research then combines mixed-methods research, employing the strengths of quantitative and qualitative research, to investigate general SN users, and thus constructs a new set of five primary and twenty-five secondary SN user privacy requirements. Such an approach has rarely been used to examine privacy requirements. Factor analysis results show superior agreement with theoretical predictions and significant improvement over previous alternative models of SN user privacy requirements. The research presented here has the potential to support the development of more sophisticated privacy controls which will increase the ability of SN users to specify their rights in SNs and to determine the protection of their own SN data.
Abstract:
Traffic incidents are key contributors to non-recurrent congestion, potentially generating significant delay. Factors that influence the duration of incidents are important to understand so that effective mitigation strategies can be implemented. To identify and quantify the effects of influential factors, a methodology for studying total incident duration based on historical data from an ‘integrated database’ is proposed. Incident duration models are developed using a selected freeway segment in the Southeast Queensland, Australia, network. The models include incident detection and recovery time as components of incident duration. A hazard-based duration modelling approach is applied to model incident duration as a function of a variety of factors that influence traffic incident duration. Parametric accelerated failure time survival models are developed to capture heterogeneity as a function of explanatory variables, with both fixed- and random-parameter specifications. The analysis reveals that factors affecting incident duration include incident characteristics (severity, type, injury, medical requirements, etc.), infrastructure characteristics (roadway shoulder availability), time of day, and traffic characteristics. The results indicate that durations for different event types are distinctly different, thus requiring different responses to clear them effectively. Furthermore, the results highlight the presence of unobserved incident duration heterogeneity, as captured by the random-parameter models, suggesting that additional factors need to be considered in future modelling efforts.
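A minimal sketch of the accelerated failure time idea: a lognormal AFT model with a single covariate and no censoring reduces to least squares on log-duration. The covariate ("injury involved"), the coefficients and the simulated data are illustrative only and are not taken from the paper's models.

```python
# Accelerated failure time (AFT) sketch: log(T) = b0 + b1*x + sigma*e.
# With lognormal errors and no censoring this reduces to least squares on
# log-duration. Covariate and coefficient values are illustrative only.
import math
import random

random.seed(42)

b0_true, b1_true, sigma = 3.0, 0.5, 0.2
# x = 1 if the incident involved an injury (hypothetical covariate).
x = [random.randint(0, 1) for _ in range(2000)]
log_t = [b0_true + b1_true * xi + sigma * random.gauss(0, 1) for xi in x]

# Least-squares fit of log-duration on the covariate.
n = len(x)
mx = sum(x) / n
my = sum(log_t) / n
b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, log_t)) / \
     sum((xi - mx) ** 2 for xi in x)
b0 = my - b1 * mx

# exp(b1) is the multiplicative 'time ratio' the covariate applies to
# incident duration, the usual way AFT coefficients are read.
print(round(b0, 2), round(b1, 2), round(math.exp(b1), 2))
```

Real incident data are censored and heterogeneous, which is why the paper fits parametric survival models with random parameters rather than plain least squares; the sketch only shows how AFT covariates act multiplicatively on duration.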
Abstract:
A novel gray-box neural network model (GBNNM), comprising a multi-layer perceptron (MLP) neural network (NN) and integrators, is proposed for a model identification and fault estimation (MIFE) scheme. With the GBNNM, both the nonlinearity and the dynamics of a class of nonlinear dynamic systems can be approximated. Unlike previous NN-based model identification methods, the GBNNM directly inherits the system dynamics and separately models the system nonlinearities. This model corresponds well with the object system and is easy to build. The GBNNM is embedded online as a nominal reference model to obtain the quantitative residual between the object system output and the GBNNM output. This residual accurately indicates the fault offset value, so it is suitable for differing fault severities. To further estimate the fault parameters (FPs), an improved extended state observer using the same NNs as the GBNNM (IESONN) is proposed, avoiding the need for knowledge of the extended state observer (ESO) nonlinearity. The proposed MIFE scheme is then applied to reaction wheels (RWs) in a satellite attitude control system (SACS). The scheme using the GBNNM is compared with other NNs in the same fault scenario, and several partial loss-of-effectiveness (LOE) faults with different severities are considered to validate the effectiveness of the FP estimation and its superiority.
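A much-simplified sketch of the residual idea: a scalar first-order plant with a multiplicative loss-of-effectiveness fault stands in for the reaction wheel, and a copy of the nominal dynamics stands in for the gray-box model. All dynamics, step sizes and numbers are invented for illustration.

```python
# Sketch of residual-based fault estimation. A scalar first-order system
# with a multiplicative loss-of-effectiveness (LOE) actuator fault stands
# in for the reaction wheel; a copy of the nominal dynamics stands in for
# the gray-box model. All dynamics and numbers are illustrative.

DT, A, B = 0.01, -1.0, 2.0   # Euler step and nominal x' = A*x + B*u

def simulate(steps, u, loe):
    """Integrate the faulty plant and the nominal model side by side."""
    x_plant = x_model = 0.0
    for _ in range(steps):
        x_plant += DT * (A * x_plant + (1.0 - loe) * B * u)  # faulty plant
        x_model += DT * (A * x_model + B * u)                # nominal model
    return x_plant, x_model

def estimate_loe(u, loe):
    """Steady-state residual, normalised by model output, recovers LOE."""
    x_plant, x_model = simulate(5000, u, loe)
    residual = x_model - x_plant
    return residual / x_model

for true_loe in (0.1, 0.3, 0.5):
    print(true_loe, round(estimate_loe(1.0, true_loe), 3))
```

The normalised residual tracks the fault severity because the fault scales the input channel, which is the sense in which a model-reference residual "indicates the fault offset value"; the paper's scheme additionally learns the unknown nonlinearities with NNs rather than assuming them known as here.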
Abstract:
Discussions of public diplomacy in recent years have paid a growing amount of attention to networks. This network perspective is understood to provide insights into various issues of public diplomacy, such as its effects, credibility, reputation, identity and narratives. This paper applies the network idea to analyse China’s Confucius Institutes initiative. It understands Confucius Institutes as a global network and argues that this network structure has potential implications for the operation of public and cultural diplomacy that are perhaps underestimated in existing accounts of Chinese cultural diplomacy. In particular, it is noted that the specific setup of Confucius Institutes requires the engagement of local stakeholders, in a way that is less centralised and more networked than comparable cultural diplomacy institutions. At the same time, the development of a more networked form of public cultural diplomacy is challenged in practice both by practical issues and by the configuration of China’s state-centric public diplomacy system, informed by the political constitution of the Chinese state.