918 results for Continuous random network
Abstract:
Wires of YBa2Cu3O7-x were fabricated by extrusion using a hydroxypropyl methylcellulose (HPMC) binder. As little as 2 wt.% binder was added to an oxide prepared by a novel co-precipitation process to produce a plastic mass which readily gave continuous extrusion of long lengths of wire in a reproducible fashion. Critical temperatures of 92 K were obtained for wires given optimum high-temperature heat treatments. Critical current densities greater than 1000 A cm⁻² were measured at 77.3 K using heat treatments at around 910°C for 10 h. These transport critical current densities, measured on centimeter-long wires, were obtained with microstructures showing a relatively dense and uniform distribution of randomly oriented, small YBa2Cu3O7-x grains.
Abstract:
In this paper we investigate the distribution of the product of Rayleigh distributed random variables. Considering the Mellin-Barnes inversion formula and using the saddle point approach, we obtain an upper bound for the product distribution. The accuracy of this tail approximation increases as the number of random variables in the product increases.
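A quick way to sanity-check such a tail approximation is Monte Carlo simulation. The minimal Python sketch below (not the authors' code; the Mellin-Barnes/saddle-point bound itself is omitted) estimates the empirical tail of a product of n independent Rayleigh variables, against which any closed-form upper bound could be compared.

```python
import numpy as np

rng = np.random.default_rng(0)

def product_tail(n_vars, threshold, scale=1.0, n_samples=1_000_000):
    """Monte Carlo estimate of P(X1 * ... * Xn > threshold) for i.i.d. Rayleigh X_i."""
    samples = rng.rayleigh(scale, size=(n_samples, n_vars))
    return (samples.prod(axis=1) > threshold).mean()

# The tail of the product thickens as more variables enter the product.
for n in (2, 4, 8):
    print(n, product_tail(n, threshold=10.0))
```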
Abstract:
Free association norms indicate that words are organized into semantic/associative neighborhoods within a larger network of words and links that bind the net together. We present evidence indicating that memory for a recent word event can depend on implicitly and simultaneously activating related words in its neighborhood. Processing a word during encoding primes its network representation as a function of the density of the links in its neighborhood. Such priming increases recall and recognition and can have long-lasting effects when the word is processed in working memory. Evidence for this phenomenon is reviewed in extralist cuing, primed free association, intralist cuing, and single-item recognition tasks. The findings also show that when a related word is presented to cue the recall of a studied word, the cue activates the target within an array of related words that distract and reduce the probability of its selection. The activation of the semantic network produces priming benefits during encoding and search costs during retrieval. In extralist cuing, recall is a negative function of cue-to-distracter strength and a positive function of neighborhood density, cue-to-target strength, and target-to-cue strength. We show how four measures derived from the network can be combined and used to predict memory performance. These measures play different roles in different tasks, indicating that the contribution of the semantic network varies with the context provided by the task. We evaluate spreading activation and quantum-like entanglement explanations for the priming effect produced by neighborhood density.
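As an illustration of the kind of network measures involved, the hypothetical Python sketch below represents free association norms as a weighted directed graph and derives cue-to-target strength and neighborhood density; the density definition used here (mean link strength among a word's associates) is an assumption for illustration, not the authors' exact formula.

```python
# Free association norms as a weighted directed graph:
# norms[word][associate] = probability of producing `associate` as a response to `word`.
norms = {
    "doctor":   {"nurse": 0.40, "hospital": 0.25, "sick": 0.10},
    "nurse":    {"doctor": 0.55, "hospital": 0.20},
    "hospital": {"doctor": 0.30, "nurse": 0.25, "sick": 0.15},
    "sick":     {"doctor": 0.20, "ill": 0.45},
    "ill":      {"sick": 0.50},
}

def cue_to_target_strength(cue, target):
    return norms.get(cue, {}).get(target, 0.0)

def neighborhood_density(word):
    """Mean strength of the links among a word's associates (its neighborhood)."""
    neighbors = list(norms.get(word, {}))
    links = [norms.get(a, {}).get(b, 0.0)
             for a in neighbors for b in neighbors if a != b]
    return sum(links) / len(links) if links else 0.0

print(cue_to_target_strength("doctor", "nurse"))  # 0.40
print(neighborhood_density("doctor"))             # 0.10
```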
Abstract:
Assessing and prioritising cost-effective strategies to mitigate the impacts of traffic incidents and accidents on non-recurrent congestion on major roads represents a significant challenge for road network managers. This research examines the influence of numerous factors associated with incidents of various types on their duration. It presents a comprehensive traffic incident data mining and analysis by developing an incident duration model based on twelve months of incident data obtained from the Australian freeway network. Parametric accelerated failure time (AFT) survival models of incident duration were developed, including log-logistic, lognormal, and Weibull, considering both fixed and random parameters, as well as a Weibull model with gamma heterogeneity. The Weibull AFT models with random parameters were appropriate for modelling incident durations arising from crashes and hazards, while a Weibull model with gamma heterogeneity was most suitable for modelling the incident duration of stationary vehicles. Significant variables affecting incident duration include characteristics of the incidents (severity, type, towing requirements, etc.), as well as the location, time of day, and traffic characteristics of the incident. Moreover, the findings reveal no significant effects of infrastructure and weather on incident duration. A significant contribution of this paper is showing that the durations of each incident type are distinct and respond to different factors. The results of this study are useful for traffic incident management agencies seeking to implement strategies to reduce incident duration, leading to reduced congestion, secondary incidents, and the associated human and economic losses.
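As a sketch of how such a parametric AFT model is fitted in practice, the example below uses the lifelines library's WeibullAFTFitter on a small hypothetical incident table; the column names are illustrative, and the random-parameter and gamma-heterogeneity extensions reported in the paper are beyond this fixed-parameter example.

```python
import pandas as pd
from lifelines import WeibullAFTFitter

# Hypothetical incident records: duration in minutes plus a few binary covariates.
df = pd.DataFrame({
    "duration":  [25, 60, 15, 90, 40, 120, 35, 55, 20, 75, 30, 100],
    "observed":  [1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1],  # 0 = censored duration
    "severe":    [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
    "towing":    [0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1],
    "peak_hour": [1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1],
})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="duration", event_col="observed")
aft.print_summary()  # in an AFT model, coefficients act multiplicatively on survival time
```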
Abstract:
Organizations from every industry sector seek to enhance their business performance and competitiveness through the deployment of contemporary information systems (IS), such as Enterprise Systems (ERP). Investments in ERP are complex and costly, attracting scrutiny and pressure to justify their cost. Thus, IS researchers highlight the need for systematic evaluation of information system success, or impact, which has resulted in the introduction of varied models for evaluating information systems. One of these systematic measurement approaches is the IS-Impact Model introduced by a team of researchers at Queensland University of Technology (QUT) (Gable, Sedera, & Chan, 2008). The IS-Impact Model is conceptualized as a formative, multidimensional index that consists of four dimensions. Gable et al. (2008) define IS-Impact as "a measure at a point in time, of the stream of net benefits from the IS, to date and anticipated, as perceived by all key-user-groups" (p. 381). The IT Evaluation Research Program (ITE-Program) at QUT has grown the IS-Impact Research Track with the central goal of conducting further studies to enhance and extend the IS-Impact Model. The overall goal of the IS-Impact research track at QUT is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable, 2009). To achieve that, the IS-Impact research track advocates programmatic research guided by the principles of tenacity, holism, and generalizability, pursued through extension research strategies. This study was conducted within the IS-Impact Research Track to further generalize the IS-Impact Model by extending it to the Saudi Arabian context. According to Hofstede (2012), the national culture of Saudi Arabia is significantly different from the Australian national culture, making the Saudi Arabian culture an interesting context for testing the external validity of the IS-Impact Model. The study revisits the IS-Impact Model from the ground up. Rather than assume the existing instrument is valid in the new context, or simply assess its validity through quantitative data collection, the study takes a qualitative, inductive approach to re-assessing the necessity and completeness of existing dimensions and measures. This is done in two phases: an Exploratory Phase and a Confirmatory Phase. The Exploratory Phase addresses the first research question of the study: "Is the IS-Impact Model complete and able to capture the impact of information systems in Saudi Arabian organizations?". The content analysis used to analyze the Identification Survey data indicated that 2 of the 37 measures of the IS-Impact Model are not applicable to the Saudi Arabian context. Moreover, no new measures or dimensions were identified, evidencing the completeness and content validity of the IS-Impact Model. In addition, the Identification Survey data suggested several concepts related to IS-Impact, the most prominent of which was "Computer Network Quality" (CNQ). The literature supported the existence of a theoretical link between IS-Impact and CNQ (CNQ is viewed as an antecedent of IS-Impact). With the primary goal of validating the IS-Impact Model within its extended nomological network, CNQ was introduced to the research model. The Confirmatory Phase addresses the second research question of the study: "Is the Extended IS-Impact Model Valid as a Hierarchical Multidimensional Formative Measurement Model?".
The objective of the Confirmatory Phase was to test the validity of the IS-Impact Model and the CNQ Model. To achieve that, IS-Impact, CNQ, and IS-Satisfaction were operationalized in a survey instrument, and the research model was then assessed by employing the Partial Least Squares (PLS) approach. The CNQ Model was validated as a formative model. Similarly, the IS-Impact Model was validated as a hierarchical multidimensional formative construct. However, the analysis indicated that one of the IS-Impact Model indicators was insignificant and could be removed from the model. Thus, the resulting Extended IS-Impact Model consists of 4 dimensions and 34 measures. Finally, the structural model was also assessed against two aspects: explanatory and predictive power. The analysis revealed that the path coefficient between CNQ and IS-Impact is significant (t-value = 4.826) and relatively strong (β = 0.426), with CNQ explaining 18% of the variance in IS-Impact. These results supported the hypothesis that CNQ is an antecedent of IS-Impact. The study demonstrates that the quality of the computer network affects the quality of the Enterprise System (ERP) and consequently the impacts of the system. Therefore, practitioners should pay attention to computer network quality. Similarly, the path coefficient between IS-Impact and IS-Satisfaction was significant (t-value = 17.79) and strong (β = 0.744), with IS-Impact alone explaining 55% of the variance in Satisfaction, consistent with the results of the original IS-Impact study (Gable et al., 2008). The research contributions include: (a) supporting the completeness and validity of the IS-Impact Model as a hierarchical multidimensional formative measurement model in the Saudi Arabian context, (b) operationalizing Computer Network Quality as conceptualized in the ITU-T Recommendation E.800 (ITU-T, 1993), (c) validating CNQ as a formative measurement model and as an antecedent of IS-Impact, and (d) conceptualizing and validating IS-Satisfaction as a reflective measurement model and as an immediate consequence of IS-Impact. The CNQ Model provides a framework for perceptually measuring Computer Network Quality from multiple perspectives, and features an easy-to-understand, easy-to-use, and economical survey instrument.
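A quick consistency check on these figures: with a single standardized predictor, the variance explained equals the squared path coefficient, and both reported pairs line up:

```latex
R^2_{\mathrm{IS\text{-}Impact}} \approx \beta^2 = 0.426^2 \approx 0.18, \qquad
R^2_{\mathrm{Satisfaction}} \approx \beta^2 = 0.744^2 \approx 0.55
```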
Abstract:
Securing the IT infrastructures of our modern lives is a challenging task because of their increasing complexity, scale, and agile nature. Monolithic approaches such as using stand-alone firewalls and IDS devices for protecting the perimeter cannot cope with complex malware and multi-step attacks. Collaborative security emerges as a promising approach, but research results in collaborative security are not yet mature and require continuous evaluation and testing. In this work, we present CIDE, a Collaborative Intrusion Detection Extension for the network security simulation platform NeSSi². Built-in functionalities include dynamic group formation based on node preferences, group-internal communication, group management, and an approach for handling the infection process for malware-based attacks. The CIDE simulation environment provides functionalities for the easy implementation of collaborating nodes in large-scale setups. We evaluate the group communication mechanism on the one hand, and on the other we provide a case study evaluating our collaborative security evaluation platform in a signature exchange scenario.
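The abstract does not spell out the group formation algorithm; as a purely hypothetical illustration of preference-based dynamic grouping (not the CIDE or NeSSi² API), the Python sketch below lets a node join the first existing group whose preference centroid is sufficiently similar to its own, and found a new group otherwise.

```python
import numpy as np

def form_groups(preferences, threshold=0.8):
    """Greedy dynamic group formation: a node joins the first group whose centroid
    is cosine-similar to its preference vector, else it founds a new group."""
    groups = []  # each group: {"members": [...], "centroid": np.ndarray}
    for node, pref in preferences.items():
        pref = np.asarray(pref, dtype=float)
        for g in groups:
            c = g["centroid"]
            sim = pref @ c / (np.linalg.norm(pref) * np.linalg.norm(c))
            if sim >= threshold:
                g["members"].append(node)
                n = len(g["members"])
                g["centroid"] = ((n - 1) * c + pref) / n  # running mean
                break
        else:
            groups.append({"members": [node], "centroid": pref})
    return [g["members"] for g in groups]

# Hypothetical preference vectors, e.g. (detects_worms, detects_ddos, edge_node).
prefs = {"n1": [1, 0, 1], "n2": [1, 0.1, 0.9], "n3": [0, 1, 0], "n4": [0.1, 1, 0]}
print(form_groups(prefs))  # [['n1', 'n2'], ['n3', 'n4']]
```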
Abstract:
We present a virtual test bed for network security evaluation in mid-scale telecommunication networks. Migration from simulation scenarios towards the test bed is supported, enabling researchers to evaluate experiments in a more realistic environment. We provide a comprehensive interface to manage, run, and evaluate experiments. On the basis of a concrete example, we show how the proposed test bed can be utilized.
Abstract:
Internet services are an important part of daily activities for most of us. These services come with sophisticated authentication requirements which may not be handled well by average Internet users. The management of secure passwords, for example, creates an extra overhead which is often neglected for usability reasons. Furthermore, password-based approaches apply only to initial logins and do not protect against unlocked workstation attacks. In this paper, we provide a non-intrusive identity verification scheme based on behavioral biometrics, where keystroke dynamics based on free text are used continuously to verify the identity of a user in real time. We improve existing keystroke dynamics based verification schemes in four aspects. First, we improve scalability by using a constant number of users instead of the whole user space to verify the identity of a target user. Second, we provide an adaptive user model which enables our solution to take changes in user behavior into consideration in the verification decision. Third, we identify a new distance measure which enables us to verify the identity of a user with shorter text. Fourth, we decrease the number of false results. Our solution is evaluated on a data set which we collected from users while they were interacting with their mailboxes during their daily activities.
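As an illustration of free-text keystroke verification, the hypothetical sketch below builds per-user digraph (key-pair) latency profiles and scores a new sample against a stored reference with a mean absolute latency difference over shared digraphs; this is a generic distance for illustration, not the improved measure proposed in the paper.

```python
from collections import defaultdict

def digraph_profile(keystrokes):
    """keystrokes: list of (key, press_time_ms). Returns mean latency per digraph."""
    latencies = defaultdict(list)
    for (k1, t1), (k2, t2) in zip(keystrokes, keystrokes[1:]):
        latencies[k1 + k2].append(t2 - t1)
    return {dg: sum(v) / len(v) for dg, v in latencies.items()}

def distance(profile_a, profile_b):
    """Mean absolute latency difference over the digraphs both profiles share."""
    shared = profile_a.keys() & profile_b.keys()
    if not shared:
        return float("inf")
    return sum(abs(profile_a[d] - profile_b[d]) for d in shared) / len(shared)

reference = digraph_profile([("t", 0), ("h", 95), ("e", 180), ("t", 400), ("h", 490)])
sample    = digraph_profile([("t", 0), ("h", 105), ("e", 200)])
print(distance(reference, sample))  # small distance suggests the same typist
```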
Abstract:
There are different ways to authenticate humans, which is an essential prerequisite for access control. The authentication process can be subdivided into three categories that rely on something someone i) knows (e.g. a password), and/or ii) has (e.g. a smart card), and/or iii) is (biometric features). Besides classical attacks on password solutions and the risk that identity-related objects can be stolen, traditional biometric solutions have their own disadvantages, such as the requirement for expensive devices, the risk of stolen bio-templates, etc. Moreover, existing approaches usually perform authentication only once, at initial login. Non-intrusive and continuous monitoring of user activities emerges as a promising solution for hardening the authentication process: iii-2) how someone behaves. In recent years, various keystroke dynamics behavior-based approaches were published that are able to authenticate humans based on their typing behavior. The majority focus on so-called static text approaches, where users are requested to type a previously defined text. Relatively few techniques are based on free text approaches that allow transparent monitoring of user activities and provide continuous verification. Unfortunately, only a few solutions are deployable in application environments under realistic conditions; unsolved problems include, for instance, scalability, high response times, and high error rates. The aim of this work is the development of behavior-based verification solutions. Our main requirement is to deploy these solutions under realistic conditions within existing environments in order to enable transparent, free text based continuous verification of active users with low error rates and response times.
Abstract:
We propose CIMD (Collaborative Intrusion and Malware Detection), a scheme for the realization of collaborative intrusion detection approaches. We argue that teams, i.e. detection groups with a common purpose for intrusion detection and response, improve the measures against malware. CIMD provides a collaboration model, a decentralized group formation mechanism, and an anonymous communication scheme. Participating agents can convey intrusion detection related objectives and associated interests for collaboration partners. These interests are based on an intrusion detection related ontology incorporating network and hardware configurations and detection capabilities. The anonymous communication provided by CIMD allows communication beyond suspicion, i.e. the adversary cannot perform better than guessing an IDS to be the source of a message at random. The evaluation takes place with the help of NeSSi² (www.nessi2.de), the Network Security Simulator, a dedicated environment for the analysis of attacks and countermeasures in mid-scale and large-scale networks. A CIMD prototype is being built based on the JIAC agent framework (www.jiac.de).
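The "beyond suspicion" guarantee quoted above can be stated formally (our paraphrase, using standard anonymity terminology): for any adversary strategy A observing a message exchanged among N participating IDS nodes,

```latex
\Pr[\mathcal{A}\ \text{identifies the true sender}] \le \frac{1}{N},
```

i.e. the adversary can do no better than a uniform random guess over the participants.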
Abstract:
As one of the measures for decreasing road traffic noise in a city, control of the traffic flow and of the physical distribution is considered. To implement such measures effectively, a model for predicting the traffic flow in the citywide road network is necessary. In this study, the existing model named AVENUE was used as the traffic flow prediction model. The traffic flow model was integrated with a road vehicle sound power model and a sound propagation model to establish a new road traffic noise prediction model. As a case study, the prediction model was applied to the road network of Tsukuba city in Japan, and a noise map of the city was made. To examine the calculation accuracy of the noise map, the calculated noise values at the main roads were compared with measured values. The results indicate that a high-accuracy noise map of the city could be produced using the noise prediction model developed in this study.
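As a generic illustration of the model chain (traffic flow, then vehicle sound power, then propagation to a map level), the hypothetical Python sketch below uses the textbook hemispherical free-field relation L_A = L_WA - 8 - 20*log10(r) and energy summation over vehicles; the sound power values and the propagation formula are stand-in assumptions, not the specific models integrated in the study.

```python
import math

def receiving_level(l_wa, r):
    """A-weighted level at distance r (m) from a point source on the ground,
    assuming hemispherical free-field spreading: L_A = L_WA - 8 - 20*log10(r)."""
    return l_wa - 8.0 - 20.0 * math.log10(r)

def combine(levels):
    """Energy-based summation of individual vehicle contributions (dB)."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels))

# Hypothetical (sound power level dB, distance m) pairs for vehicles on a link.
vehicles = [(100.0, 10.0), (104.0, 15.0), (98.0, 8.0)]
print(round(combine([receiving_level(lw, r) for lw, r in vehicles]), 1))  # ~76.9 dB
```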
Abstract:
The existence of the Macroscopic Fundamental Diagram (MFD), which relates network space-mean density and flow, has been shown in urban networks under homogeneous traffic conditions. Since the MFD represents area-wide network traffic performance, studies on perimeter control strategies and area traffic state estimation utilizing the MFD concept have been reported. The key requirement for a well-defined MFD is the homogeneity of the area-wide traffic condition, which is not universally expected in the real world. For the practical application of the MFD concept, several researchers have identified the influencing factors for network homogeneity. However, they did not explicitly take drivers’ behaviour under real-time information provision into account, which has a significant impact on the shape of the MFD. This research aims to demonstrate the impact of drivers’ route choice behaviour on network performance by employing the MFD as a measurement. A microscopic simulation is chosen as the experimental platform. By changing the ratio of en-route informed drivers to pre-trip informed drivers, and by varying route choice parameters, various scenarios are simulated in order to investigate how drivers’ adaptation to traffic congestion influences network performance and the MFD shape. This study confirmed the impact of information provision on the MFD shape and highlighted the significance of the route choice parameter setting as an influencing factor in MFD analysis.
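As an illustration of the MFD as a measurement, the hypothetical sketch below aggregates per-link detector data into network space-mean density and flow for each interval, producing the (density, flow) pairs whose scatter traces the MFD; length-weighted averaging is one common convention, not necessarily the one used in the study.

```python
import numpy as np

def mfd_point(link_density, link_flow, link_length):
    """Length-weighted network space-mean density (veh/km) and flow (veh/h)."""
    w = link_length / link_length.sum()
    return (w * link_density).sum(), (w * link_flow).sum()

rng = np.random.default_rng(1)
lengths = rng.uniform(200, 800, size=50)                  # link lengths (m)
points = []
for t in range(60):                                       # 60 aggregation intervals
    k = rng.uniform(0, 120, size=50)                      # link densities (veh/km)
    q = np.clip(k * rng.uniform(8, 12, size=50), 0, 900)  # crude link flows (veh/h)
    points.append(mfd_point(k, q, lengths))
# Plotting `points` (density on x, flow on y) yields the network's MFD scatter.
```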
Abstract:
The use of Wireless Sensor Networks (WSNs) for Structural Health Monitoring (SHM) has become a promising approach due to many advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data synchronization error and data loss have prevented these distinct systems from being extensively used. Recently, several SHM-oriented WSNs have been proposed and are believed to be able to overcome a large number of technical uncertainties. Nevertheless, there is limited research verifying the applicability of those WSNs with respect to demanding SHM applications like modal analysis and damage identification. This paper first presents a brief review of the most inherent uncertainties of SHM-oriented WSN platforms and then investigates their effects on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques when employing merged data from multiple tests. The two OMA families selected for this investigation are Frequency Domain Decomposition (FDD) and Data-driven Stochastic Subspace Identification (SSI-data), as both have been widely applied in the past decade. Experimental accelerations collected by a wired sensory system on a large-scale laboratory bridge model are initially used as clean data before being contaminated by different data pollutants in a sequential manner to simulate practical SHM-oriented WSN uncertainties. The results of this study show the robustness of FDD and the precautions needed for the SSI-data family when dealing with SHM-WSN uncertainties. Finally, the use of measurement channel projection for the time-domain OMA techniques and a preferred combination of the OMA techniques to cope with the SHM-WSN uncertainties are recommended.
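A minimal sketch of the core FDD step (not the full merged-data workflow of the paper): estimate the cross-power spectral density matrix of the acceleration channels with scipy, then take its SVD at each frequency line; peaks of the first singular value indicate natural frequencies, and the corresponding singular vectors approximate mode shapes.

```python
import numpy as np
from scipy.signal import csd

def fdd(acc, fs, nperseg=1024):
    """acc: (n_samples, n_channels) accelerations. Returns frequencies and the
    first singular value of the CSD matrix at each frequency (the FDD spectrum)."""
    n_ch = acc.shape[1]
    f, _ = csd(acc[:, 0], acc[:, 0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(acc[:, i], acc[:, j], fs=fs, nperseg=nperseg)
    s1 = np.array([np.linalg.svd(Gk, compute_uv=False)[0] for Gk in G])
    return f, s1

# Synthetic two-channel record with a 5 Hz mode buried in noise.
fs = 256
t = np.arange(0, 60, 1 / fs)
mode = np.sin(2 * np.pi * 5 * t)
acc = np.column_stack([mode + 0.5 * np.random.randn(t.size),
                       0.8 * mode + 0.5 * np.random.randn(t.size)])
f, s1 = fdd(acc, fs)
print(f[np.argmax(s1)])  # peak close to 5.0 Hz
```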
Abstract:
Historically, a significant gap between male and female wages has existed in the Australian labour market. Indeed, this wage differential was institutionalised in the 1912 arbitration decision, which determined that the basic female wage would be set at between 54 and 66 per cent of the male wage. More recently, however, the 1969 and 1972 Equal Pay Cases determined that male/female wage relativities should be based upon the premise of equal pay for work of equal value. It is important to note that the mere observation that average wages differ between males and females is not, in itself, evidence of sex discrimination. Economists restrict the definition of wage discrimination to cases where two distinct groups receive different average remuneration for reasons unrelated to differences in productivity characteristics. This paper extends previous studies of wage discrimination in Australia (Chapman and Mulvey, 1986; Haig, 1982) by correcting the estimated male/female wage differential for the existence of non-random sampling. Previous Australian estimates of male/female human capital based wage specifications, together with estimates of the corresponding wage differential, all suffer from a failure to address this issue. If the sample of females observed to be working does not represent a random sample, then the estimates of the male/female wage differential will be both biased and inconsistent.
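For readers unfamiliar with the correction, the hypothetical sketch below implements the classic Heckman two-step selection correction with statsmodels: a probit participation equation yields the inverse Mills ratio, which then enters the wage equation as an extra regressor; the data and variable names are simulated for illustration, not the paper's.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
educ = rng.normal(12, 2, n)
kids = rng.integers(0, 3, n)  # exclusion variable: affects working, not wages
u, e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], n).T

works = (0.5 + 0.1 * educ - 0.6 * kids + u) > 0  # selection (participation)
log_wage = 0.5 + 0.08 * educ + e                 # latent wage, observed if working

# Step 1: probit for participation, then the inverse Mills ratio.
Z = sm.add_constant(np.column_stack([educ, kids]))
probit = sm.Probit(works.astype(float), Z).fit(disp=0)
xb = Z @ probit.params
mills = norm.pdf(xb) / norm.cdf(xb)

# Step 2: wage regression on the selected sample, adding the Mills ratio.
X = sm.add_constant(np.column_stack([educ[works], mills[works]]))
ols = sm.OLS(log_wage[works], X).fit()
print(ols.params)  # [const, educ, lambda]; a significant lambda signals selection bias
```

Without the Mills ratio term, the education coefficient from OLS on the working sample would be biased and inconsistent, which is exactly the problem the paper addresses.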