530 results for Default mode network
Abstract:
The operation of Autonomous Underwater Vehicles (AUVs) within underwater sensor network fields provides an opportunity to reuse the network infrastructure for long-baseline localisation of the AUV. Computationally efficient localisation can be accomplished using off-the-shelf hardware that is comparatively inexpensive and may already be deployed in the environment for monitoring purposes. This paper describes the development of a particle-filter-based localisation system implemented onboard an AUV in real time, using ranging information obtained from an ad-hoc underwater sensor network. An experimental demonstration of this approach was conducted in a lake, with results illustrating network communication and localisation performance.
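The filter cycle is the standard predict/update/resample loop: particles are propagated with the AUV's dead-reckoned motion, then reweighted by the likelihood of each acoustic range measurement from a network node. Below is a minimal sketch assuming a 2D field, Gaussian range noise, and invented node positions and noise parameters (none of these values come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed sensor-node positions (x, y) in metres.
nodes = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])

N = 1000                                   # number of particles
particles = rng.uniform(0, 100, (N, 2))    # initial belief over a 100 m field
weights = np.full(N, 1.0 / N)

def predict(particles, velocity, dt, motion_std=1.0):
    """Propagate particles with dead-reckoned velocity plus process noise."""
    return particles + velocity * dt + rng.normal(0, motion_std, particles.shape)

def update(particles, weights, node_xy, measured_range, range_std=2.0):
    """Reweight particles by the likelihood of an acoustic range measurement."""
    expected = np.linalg.norm(particles - node_xy, axis=1)
    likelihood = np.exp(-0.5 * ((measured_range - expected) / range_std) ** 2)
    weights = weights * likelihood + 1e-300    # avoid all-zero weights
    return weights / weights.sum()

def resample(particles, weights):
    """Resample to combat particle degeneracy."""
    idx = rng.choice(len(particles), len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One filter cycle: predict on AUV dead reckoning, update on a range ping.
particles = predict(particles, velocity=np.array([1.0, 0.5]), dt=1.0)
weights = update(particles, weights, nodes[0], measured_range=60.0)
particles, weights = resample(particles, weights)
print("AUV position estimate:", np.average(particles, weights=weights, axis=0))
```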
Abstract:
Many large-scale GNSS CORS networks have been deployed around the world to support various commercial and scientific applications. To use these networks for real-time kinematic positioning services, one of the major challenges is ambiguity resolution (AR) over long inter-station baselines in the presence of considerable atmospheric biases. Usually, the widelane ambiguities are fixed first, followed by determination of the narrowlane ambiguity integers based on the ionosphere-free model, in which the widelane integers are introduced as known quantities. This paper seeks to improve AR performance over long baselines through efficient procedures for improved float solutions and ambiguity fixing. The contribution is threefold: (1) instead of using the ionosphere-free measurements, absolute and/or relative ionospheric constraints are introduced in the ionosphere-constrained model to strengthen the model, resulting in better float solutions; (2) realistic widelane ambiguity precision is estimated by capturing the multipath effects due to the observation complexity, improving the reliability of widelane AR; (3) for narrowlane AR, partial AR is applied to a subset of ambiguities selected in order of increasing elevation. For fixing the scalar ambiguity, a rounding method with controllable error probability is proposed. The established ionosphere-constrained model can be solved efficiently with a sequential Kalman filter, and it can either be reduced to special models simply by adjusting the variances of the ionospheric constraints, or extended with more parameters and constraints. The presented methodology is tested over seven baselines of around 100 km from the US CORS network. The results show that the new widelane AR scheme obtains a 99.4% success rate with a 0.6% failure rate, while the new rounding method for narrowlane AR obtains an 89% fixing rate with a 0.8% failure rate. In summary, AR reliability can be efficiently improved with a rigorously controlled probability of incorrectly fixed ambiguities.
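The error-probability-controllable rounding can be illustrated with the standard bias-affected rounding success-rate formula: under a Gaussian error model, the probability that rounding a scalar float ambiguity to the nearest integer is correct depends on its fractional part (treated as a bias) and its standard deviation, so a fix is accepted only when the implied failure probability stays below a chosen bound. This is a minimal sketch of that idea, not necessarily the paper's exact formulation; the 0.8% default bound merely echoes the failure rate quoted above:

```python
import math

def rounding_success_prob(float_amb: float, sigma: float) -> float:
    """Probability that rounding to the nearest integer is correct,
    assuming Gaussian error with std sigma and treating the fractional
    part of the float ambiguity as a bias."""
    frac = float_amb - round(float_amb)      # distance to nearest integer
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # normal CDF
    return phi((0.5 - frac) / sigma) - phi((-0.5 - frac) / sigma)

def try_fix(float_amb: float, sigma: float, max_fail_prob: float = 0.008):
    """Fix the scalar ambiguity only if the failure probability is below
    a user-controlled bound; otherwise keep the float solution."""
    p_ok = rounding_success_prob(float_amb, sigma)
    if 1.0 - p_ok <= max_fail_prob:
        return round(float_amb), p_ok        # accepted integer fix
    return None, p_ok                        # leave unresolved

print(try_fix(12.04, sigma=0.10))   # close to an integer: fixed
print(try_fix(12.38, sigma=0.10))   # large bias: left as float
```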
Abstract:
Detecting anomalies in online social networks is a significant task, as it helps reveal useful and interesting information about user behavior on the network. This paper proposes a rule-based hybrid method using graph theory, fuzzy clustering, and fuzzy rules for modeling the user relationships inherent in an online social network and for identifying anomalies. Fuzzy C-Means clustering is used to cluster the data, and a fuzzy inference engine is used to generate rules based on the cluster behavior. The proposed method achieves improved accuracy in identifying anomalies compared with existing methods.
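As a rough illustration of the pipeline, the sketch below clusters toy user-behaviour features with a minimal Fuzzy C-Means implementation and flags users who sit unusually far from their best-matching cluster centre, a crude stand-in for the paper's fuzzy inference rules (the features, cluster count, and threshold are all invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

def fuzzy_c_means(X, c=2, m=2.0, iters=100, eps=1e-5):
    """Minimal Fuzzy C-Means: returns cluster centres and the
    membership matrix U of shape (n_samples, c)."""
    U = rng.dirichlet(np.ones(c), size=len(X))   # random initial memberships
    for _ in range(iters):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))            # standard FCM update
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < eps:
            return centres, U_new
        U = U_new
    return centres, U

# Toy user features: (posts per day, fraction of posts containing links).
normal = rng.normal([5.0, 0.2], [1.0, 0.05], (60, 2))
odd = rng.normal([5.0, 0.2], [8.0, 0.4], (4, 2))   # erratic users
X = np.vstack([normal, odd])

centres, U = fuzzy_c_means(X, c=2)
# Rule in the spirit of the paper: IF a user is far from every cluster
# centre THEN flag as anomalous.
d_best = np.linalg.norm(X - centres[U.argmax(axis=1)], axis=1)
print("flagged users:", np.where(d_best > d_best.mean() + 2 * d_best.std())[0])
```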
Abstract:
Safety concerns in the operation of autonomous aerial systems require that safe-landing protocols be followed when a mission must be aborted due to mechanical or other failure. This article presents a pulse-coupled neural network (PCNN) to assist vegetation classification in a vision-based landing-site detection system for an unmanned aircraft. We propose a heterogeneous computing architecture and an OpenCL implementation of a PCNN feature generator. Its performance is compared across OpenCL kernels designed for CPU, GPU, and FPGA platforms. This comparison examines the compute times required for network convergence under a variety of images to assess the feasibility of real-time feature detection.
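A common PCNN feature for this kind of vegetation/texture discrimination is the per-pixel firing count accumulated over the network's iterations. The sketch below implements one simplified PCNN formulation in plain NumPy rather than OpenCL; the linking kernel and the decay/gain constants are illustrative and may differ from the paper's:

```python
import numpy as np

def neighbour_sum(Y):
    """Sum of the 8-neighbourhood of each pixel (zero padding)."""
    P = np.pad(Y, 1)
    return (P[:-2, :-2] + P[:-2, 1:-1] + P[:-2, 2:] +
            P[1:-1, :-2] +                P[1:-1, 2:] +
            P[2:, :-2]  + P[2:, 1:-1]  + P[2:, 2:])

def pcnn(stimulus, steps=20, beta=0.2,
         aF=0.1, aL=0.3, aT=0.2, vF=0.1, vL=0.2, vT=20.0):
    """Simplified pulse-coupled neural network; returns the firing-count
    map, a common PCNN-derived texture feature."""
    S = stimulus.astype(float)
    F = np.zeros_like(S); L = np.zeros_like(S)
    Y = np.zeros_like(S); T = np.ones_like(S)
    fire_count = np.zeros_like(S)
    for _ in range(steps):
        link = neighbour_sum(Y)
        F = np.exp(-aF) * F + vF * link + S     # feeding input
        L = np.exp(-aL) * L + vL * link         # linking input
        U = F * (1.0 + beta * L)                # internal activity
        Y = (U > T).astype(float)               # pulse output
        T = np.exp(-aT) * T + vT * Y            # dynamic threshold
        fire_count += Y
    return fire_count

# Toy 8x8 'image': the bright patch fires repeatedly, the background never.
img = np.zeros((8, 8)); img[2:5, 2:5] = 1.0
print(pcnn(img))
```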
Abstract:
Network coding is a method for achieving channel capacity in networks. The key idea is to allow network routers to linearly mix packets as they traverse the network, so that recipients receive linear combinations of packets. Network-coded systems are vulnerable to pollution attacks, in which a single malicious node floods the network with bad packets and prevents the receiver from decoding correctly. Cryptographic defenses against these attacks are based on homomorphic signatures and MACs. These proposals, however, cannot handle the mixing of packets from multiple sources, which is needed to achieve the full benefits of network coding. In this paper we address the integrity of multi-source mixing. We propose a security model for this setting and provide a generic construction.
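To see why homomorphic tags suit network coding, note that an inner-product MAC t = ⟨k, v⟩ mod p commutes with linear mixing: any linear combination of valid (vector, tag) pairs under the same key is again a valid pair. The sketch below shows only this basic single-source idea, not the paper's multi-source construction, and the field size is illustrative:

```python
import random

p = 2**31 - 1                 # a Mersenne prime field for the example
random.seed(0)

def keygen(dim):
    """Secret MAC key: one field element per vector coordinate."""
    return [random.randrange(p) for _ in range(dim)]

def mac(key, vec):
    """Homomorphic inner-product tag: t = <key, vec> mod p."""
    return sum(k * v for k, v in zip(key, vec)) % p

def combine(packets, coeffs):
    """Router-side linear mixing of (vector, tag) packets."""
    dim = len(packets[0][0])
    out_vec = [sum(c * pkt[0][i] for c, pkt in zip(coeffs, packets)) % p
               for i in range(dim)]
    out_tag = sum(c * pkt[1] for c, pkt in zip(coeffs, packets)) % p
    return out_vec, out_tag

key = keygen(4)
v1, v2 = [1, 2, 3, 4], [5, 6, 7, 8]
p1, p2 = (v1, mac(key, v1)), (v2, mac(key, v2))
mixed_vec, mixed_tag = combine([p1, p2], coeffs=[3, 7])
assert mac(key, mixed_vec) == mixed_tag        # honest mixing verifies
mixed_vec[0] = (mixed_vec[0] + 1) % p          # pollute one coordinate...
assert mac(key, mixed_vec) != mixed_tag        # ...and verification fails
```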
Abstract:
For TREC Crowdsourcing 2011 (Stage 2) we propose a network-based approach for assigning an indicative measure of worker trustworthiness in crowdsourced labelling tasks. Workers, the gold standard, and worker/gold-standard agreements are modelled as a network. For the purpose of worker trustworthiness assignment, a variant of the PageRank algorithm, named TurkRank, is used to adaptively combine evidence that suggests worker trustworthiness, i.e., agreement with other trustworthy co-workers and agreement with the gold standard. A single parameter controls the importance of co-worker agreement versus gold-standard agreement. The TurkRank score calculated for each worker is then incorporated into a worker-weighted mean label aggregation.
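A minimal sketch of the idea, using a small invented agreement matrix: trust scores are iterated so that a worker is trusted when trusted co-workers and the gold standard agree with them, with a single parameter lam trading the two sources of evidence off (the actual TurkRank formulation may normalise the graph differently):

```python
import numpy as np

def turkrank_like(agree, gold_agree, lam=0.5, iters=50):
    """PageRank-style trust scores: a worker is trustworthy when
    trustworthy co-workers (agree) and the gold standard (gold_agree)
    agree with them; lam weights co-worker vs. gold evidence."""
    t = np.full(len(agree), 1.0 / len(agree))
    for _ in range(iters):
        t = lam * (agree @ t) + (1 - lam) * gold_agree
        t /= t.sum()                  # keep scores comparable
    return t

# Hypothetical pairwise label-agreement rates for three workers.
agree = np.array([[0.0, 0.8, 0.3],
                  [0.8, 0.0, 0.2],
                  [0.3, 0.2, 0.0]])
gold_agree = np.array([0.9, 0.8, 0.1])   # agreement with gold items

scores = turkrank_like(agree, gold_agree)
print(scores)    # worker 2 scores lowest, so its labels get least weight
```

The resulting scores would then weight each worker's labels in the mean-label aggregation described above.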
Abstract:
At NDSS 2012, Yan et al. analyzed the security of several challenge-response user authentication protocols against passive observers and proposed a generic counting-based statistical attack that recovers the secret of some counting-based protocols given a number of observed authentication sessions. Roughly speaking, the attack exploits the fact that, once the responses are taken into account, secret (pass) objects appear in challenges with a different probability from non-secret (decoy) objects. Although Yan et al. noted that a protocol susceptible to this attack should minimize this difference, beyond a few suggestions they did not detail how this can be achieved. In this paper, we attempt to fill this gap by generalizing the attack with a much more comprehensive theoretical analysis. Our treatment is more quantitative, enabling us to describe a method for theoretically estimating a lower bound on the number of sessions for which a protocol can be safely used against the attack. Our results include 1) two proposed fixes that make counting protocols practically safe against the attack at the cost of usability, 2) the observation that the attack applies to non-counting-based protocols as well, as long as challenge generation is contrived, and 3) two main design principles for user authentication protocols, which can be considered extensions of the principles from Yan et al. This detailed theoretical treatment can be used as a guideline during the design of counting-based protocols to determine their susceptibility to this attack. The Foxtail protocol, one of the protocols analyzed by Yan et al., is used as a representative to illustrate our theoretical and experimental results.
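The bias the attack exploits is easy to simulate. In the toy counting protocol below the response directly reveals how many pass objects appear in the challenge, which exaggerates the leakage of real protocols such as Foxtail; averaging responses over the sessions in which each object appears then separates pass objects from decoys. All sizes are invented for the illustration:

```python
import random
random.seed(1)

N_OBJ, N_SECRET, WINDOW = 30, 5, 15
objects = list(range(N_OBJ))
secret = set(random.sample(objects, N_SECRET))

def session():
    """One toy counting-based round: the challenge is a random subset,
    the response is the number of secret (pass) objects shown."""
    challenge = random.sample(objects, WINDOW)
    return challenge, sum(1 for o in challenge if o in secret)

def counting_attack(n_sessions):
    """Passive statistical attack: objects that co-occur with higher
    responses are more likely to be secret."""
    total = {o: 0.0 for o in objects}
    seen = {o: 0 for o in objects}
    for _ in range(n_sessions):
        challenge, response = session()
        for o in challenge:
            total[o] += response
            seen[o] += 1
    avg = {o: total[o] / max(seen[o], 1) for o in objects}
    return set(sorted(objects, key=avg.get, reverse=True)[:N_SECRET])

print("true secret: ", sorted(secret))
print("attack guess:", sorted(counting_attack(2000)))
```

With the parameters above, a secret object raises the expected response by roughly half a count per session, so a few thousand observed sessions suffice to recover the secret set.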
Abstract:
Obtaining attribute values of non-chosen alternatives in a revealed preference context is challenging because non-chosen alternative attributes are unobserved by choosers, chooser perceptions of attribute values may not reflect reality, existing methods for imputing these values suffer from shortcomings, and obtaining non-chosen attribute values is resource intensive. This paper presents a unique Bayesian (multiple) Imputation Multinomial Logit model that imputes unobserved travel times and distances of non-chosen travel modes based on random draws from the conditional posterior distribution of missing values. The calibrated Bayesian (multiple) Imputation Multinomial Logit model imputes non-chosen time and distance values that convincingly replicate observed choice behavior. Although network skims were used for calibration, more realistic data such as supplemental geographically referenced surveys or stated preference data may be preferred. The model is ideally suited for imputing variation in intrazonal non-chosen mode attributes and for assessing the marginal impacts of travel policies, programs, or prices within traffic analysis zones.
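A stripped-down version of the idea: treat the network-skim value as a prior for the unobserved travel time of a non-chosen mode and reweight prior draws by the multinomial-logit likelihood of the observed choice. The importance-sampling step below is a simple stand-in for the paper's conditional-posterior sampler, and the time coefficient is an assumed value:

```python
import numpy as np

rng = np.random.default_rng(0)
beta_time = -0.08          # assumed disutility per minute of travel time

def mnl_prob(times, chosen):
    """Multinomial-logit probability of the observed mode choice."""
    u = beta_time * np.asarray(times)
    e = np.exp(u - u.max())
    return e[chosen] / e.sum()

def impute_missing_time(times, chosen, missing, prior_mu, prior_sd,
                        n_draws=5000):
    """Draw the unobserved time of a non-chosen mode from its posterior:
    a skim-based prior reweighted by the MNL choice likelihood."""
    draws = rng.normal(prior_mu, prior_sd, n_draws).clip(min=1.0)
    w = np.empty(n_draws)
    for i, d in enumerate(draws):
        t = list(times)
        t[missing] = d
        w[i] = mnl_prob(t, chosen)
    w /= w.sum()
    return rng.choice(draws, size=5, p=w)    # 5 multiple imputations

# Car (chosen, 20 min) vs transit (unobserved; skim suggests ~35 min).
times = [20.0, None]
print(impute_missing_time(times, chosen=0, missing=1,
                          prior_mu=35.0, prior_sd=8.0))
```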
Abstract:
To minimize load shedding in a microgrid during autonomous operation, islanded neighbouring microgrids can be interconnected if they are on a self-healing network and extra generation capacity is available from the Distributed Energy Resources (DERs) of one of the microgrids. In this way, the total load in the system of interconnected microgrids can be shared by all the DERs within these microgrids. However, this requires carefully designed self-healing and supply-restoration control algorithms, protection systems, and communication infrastructure at the network and microgrid levels. In this chapter, a hierarchical control structure for interconnecting neighbouring autonomous microgrids is first discussed, with the introduced primary control level as the main focus. It is then demonstrated how, through the developed primary control level, parallel DERs in a system of multiple interconnected autonomous microgrids can properly share the system load. The controller is designed such that the converter-interfaced DERs operate in a voltage-controlled mode following a decentralized power-sharing algorithm based on droop control. Switching in the converters is controlled using linear-quadratic-regulator-based state feedback, which is more stable than conventional proportional-integral controllers and prevents instability among parallel DERs when two microgrids are interconnected. The efficacy of the primary control level of DERs in the system of multiple interconnected autonomous microgrids is validated through simulations considering detailed dynamic models of DERs and converters.
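The decentralized sharing rule rests on conventional P-f droop: each DER sets its frequency as f_i = f0 - m_i * P_i, and since all interconnected units must settle at a common frequency, load divides in inverse proportion to the droop gains. A small worked example with illustrative numbers:

```python
# Conventional P-f droop: f_i = f0 - m_i * P_i. In steady state every
# DER settles at the same frequency, so load is shared in inverse
# proportion to the droop gains m_i (all numbers are illustrative).

f0 = 50.0                     # nominal frequency, Hz
m = [0.0005, 0.001]           # droop gains of DER 1 and DER 2, Hz/kW
P_load = 300.0                # total load after interconnection, kW

# Solve f0 - m[0]*P1 = f0 - m[1]*P2 together with P1 + P2 = P_load:
P1 = P_load * m[1] / (m[0] + m[1])
P2 = P_load - P1
f_ss = f0 - m[0] * P1
print(f"DER1: {P1:.1f} kW, DER2: {P2:.1f} kW, frequency: {f_ss:.3f} Hz")
# DER1 (the smaller droop gain) picks up twice the load of DER2.
```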
Abstract:
While both the restoration of the blood supply and an appropriate local mechanical environment are critical for uneventful bone healing, their influence on each other remains unclear. Human bone fracture haematomas (<72 h post-trauma) were cultivated for 3 days in fibrin matrices, with or without cyclic compression. Conditioned medium from these cultures enhanced the formation of vessel-like networks by HMEC-1 cells, and mechanical loading further elevated this effect without affecting the cells' metabolic activity. While the haematomas released the angiogenesis regulators VEGF and TGF-β1, their concentrations were not affected by mechanical loading. However, direct cyclic stretching of the HMEC-1 cells decreased network formation. The appearance of the networks and a trend towards elevated VEGF under strain suggest physical disruption rather than biochemical modulation as the responsible mechanism. Thus, early fracture haematomas and their mechanical loading increase the paracrine stimulation of endothelial organisation in vitro, but direct periodic strains may disrupt or impair vessel assembly under otherwise favourable conditions.
Abstract:
The appropriateness of default investment options in participant-directed retirement plans such as 401(k) plans has been in sharp focus, given that most participants fail to nominate an investment option to direct their contributions. In the United States (US), prior to the Pension Protection Act (PPA) of 2006, plan fiduciaries often selected a money market fund as the default option. Whilst this 'low risk, low return' investment option was considered a 'safe' choice by many fiduciaries fearful of litigation risk, it was heavily criticized for resulting in inadequate wealth at retirement, particularly when retirees were living much longer and facing inflation risk (see, for example, Viceira, 2008; Skinner, 2009)...
Abstract:
This special issue of Networking Science focuses on the Next Generation Network (NGN), which enables the deployment of access-independent services over converged fixed and mobile networks. NGN is a packet-based network that uses the Internet Protocol (IP) to transport various types of traffic (voice, video, data, and signalling). NGN facilitates the easy adoption of distributed computing applications by providing high-speed connectivity in a converged networked environment. It also makes end-user devices and applications highly intelligent and efficient by empowering them with programmability and remote configuration options. However, there are a number of important challenges in provisioning next-generation network technologies in a converged communication environment. Preliminary challenges include those relating to QoS, switching and routing, management and control, and security, all of which must be addressed urgently. The consideration of architectural issues in the design and provision of secure services for NGN deserves special attention and is hence the main theme of this special issue.
Abstract:
In this study, experimental and numerical investigations were conducted to explore the possibility of using the A0 Lamb-wave mode to detect the position of delamination in carbon fiber reinforced plastic (CFRP) laminated beams. An experimental technique for exciting and sensing the pure A0 mode was developed. By measuring the propagation speed of the A0 mode and the travel time of the signal reflected from the delamination, the delamination's location can be identified both experimentally and numerically. Moreover, the numerical analysis was extended to gain a better understanding of the complex interaction between the A0 mode and a long delamination.
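The localisation step reduces to a pulse-echo calculation: the reflected A0 packet covers the sensor-to-delamination distance twice, so the position is half the product of the measured group velocity and the echo's time of flight. A worked example with assumed values (the actual A0 group velocity depends on the laminate and the excitation frequency):

```python
# Pulse-echo localisation of a delamination from an A0-mode reflection.
v_A0 = 1500.0        # assumed A0 group velocity in the CFRP beam, m/s
t_echo = 120e-6      # measured arrival time of the delamination echo, s
t_excite = 0.0       # excitation instant, s

# The echo travels to the defect and back, hence the factor of 2.
d = v_A0 * (t_echo - t_excite) / 2.0
print(f"estimated delamination position: {d * 1000:.1f} mm from the sensor")
# -> 90.0 mm for these illustrative values
```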
Abstract:
The objective of this research was to develop a model to estimate future freeway pavement construction costs in Henan Province, China. A comprehensive set of factors contributing to the cost of freeway pavement construction was included in the model formulation. These factors reflect the characteristics of the region, topography, and altitude variation; the costs of labour, materials, and equipment; and time-related variables such as index numbers of labour, material, and equipment prices. An Artificial Neural Network model using the back-propagation learning algorithm was developed to estimate the cost of freeway pavement construction. A total of 88 valid freeway cases were obtained from freeway construction projects let by the Henan Transportation Department during the period 1994–2007. Data from a random selection of 81 freeway cases were used to train the Neural Network model, and the remaining data were used to test its performance. The tested model was then used to predict freeway pavement construction costs in 2010 based on predicted input values. In addition, this paper suggests a correction to the predicted future freeway pavement construction costs. Since future freeway pavement construction costs are affected by many factors, the predictions obtained by the proposed method, and therefore the model, will need to be tested once actual data become available.
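The estimator is a standard feed-forward network trained by back-propagation. The sketch below trains a one-hidden-layer regression network on synthetic stand-ins for the paper's inputs, mirroring its 81-case training / 7-case test split; the data, architecture, and learning rate are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the paper's inputs (terrain, price indices,
# labour/material/equipment costs), scaled to [0, 1]; target is cost.
X = rng.uniform(0, 1, (81, 5))                            # 81 training cases
y = (X @ np.array([0.3, 0.2, 0.2, 0.2, 0.1]))[:, None]    # toy cost signal

H = 8                                            # hidden units
W1 = rng.normal(0, 0.5, (5, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.1

for epoch in range(2000):                        # plain back-propagation
    h = np.tanh(X @ W1 + b1)                     # forward pass
    pred = h @ W2 + b2
    err = pred - y                               # squared-error gradient
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)             # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

test = rng.uniform(0, 1, (7, 5))                 # 7 held-out cases
print("predicted costs:", (np.tanh(test @ W1 + b1) @ W2 + b2).ravel())
```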
Abstract:
Working primarily within the natural landscape, this practice-led research project explored connections between the artist's visual and perceptual experience of a journey or place, while simultaneously emphasizing the capacity of digital media to create perceptual dissonance. By exploring concepts of time, viewpoint, and duration of sequences, and by manipulating traditional constructs of stop-frame animation, the practical work created a cognitive awareness of the elements of the journey through optical sensations. The work provided an opportunity to reflect on the nature of visual experience and its mediation through images. The project recontextualized the selected media of still photography, animation, and projection within contemporary display modes of multiple-screen installations by analysing relationships between the experienced and the perceived. The resulting works add to current discourse on the interstices between still and moving imagery in a digital world.