601 results for Continuous random network
Abstract:
Internet services are an important part of daily activities for most of us. These services come with sophisticated authentication requirements which may not be handled well by average Internet users. The management of secure passwords, for example, creates an extra overhead which is often neglected for usability reasons. Furthermore, password-based approaches apply only to initial logins and do not protect against unlocked-workstation attacks. In this paper, we provide a non-intrusive identity verification scheme based on behavior biometrics, where keystroke dynamics on free text is used to continuously verify the identity of a user in real time. We improve existing keystroke-dynamics-based verification schemes in four aspects. First, we improve scalability by using a constant number of users instead of the whole user space to verify the identity of the target user. Second, we provide an adaptive user model which enables our solution to take changes in user behavior into account in the verification decision. Third, we identify a new distance measure which enables us to verify the identity of a user with shorter text. Fourth, we decrease the number of false results. Our solution is evaluated on a data set which we collected from users while they were interacting with their mailboxes during their daily activities.
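The distance-measure idea above can be illustrated with a minimal free-text sketch. All names and the averaging scheme below are our own illustration, not the paper's measure: per-user mean digraph (key-pair) latencies are built from keystroke timings and compared over the digraphs both profiles share.

```python
from collections import defaultdict

def digraph_means(keystrokes):
    """Mean latency (ms) for each observed digraph.
    keystrokes: list of (key, press_time_ms) tuples in typing order."""
    latencies = defaultdict(list)
    for (k1, t1), (k2, t2) in zip(keystrokes, keystrokes[1:]):
        latencies[k1 + k2].append(t2 - t1)
    return {d: sum(v) / len(v) for d, v in latencies.items()}

def profile_distance(p1, p2):
    """Mean absolute latency difference over shared digraphs;
    None when the profiles share no digraphs (too little text)."""
    shared = p1.keys() & p2.keys()
    if not shared:
        return None
    return sum(abs(p1[d] - p2[d]) for d in shared) / len(shared)
```

A low distance between an enrollment profile and a live-typing probe would count as evidence for the claimed identity; shorter probe texts simply yield fewer shared digraphs.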
Abstract:
There are different ways to authenticate humans, which is an essential prerequisite for access control. The authentication process can be subdivided into three categories that rely on something someone i) knows (e.g. a password), ii) has (e.g. a smart card), and/or iii) is (biometric features). Besides classical attacks on password solutions and the risk that identity-related objects can be stolen, traditional biometric solutions have their own disadvantages, such as the requirement of expensive devices and the risk of stolen bio-templates. Moreover, in existing approaches the authentication process is usually performed only once, at login. Non-intrusive and continuous monitoring of user activities emerges as a promising way of hardening the authentication process with a fourth category: iii-2) how someone behaves. In recent years, various keystroke-dynamics approaches were published that are able to authenticate humans based on their typing behavior. The majority focus on so-called static-text approaches, where users are requested to type a previously defined text. Relatively few techniques are based on free-text approaches, which allow transparent monitoring of user activities and provide continuous verification. Unfortunately, only a few solutions are deployable in application environments under realistic conditions; unsolved problems include scalability, high response times and high error rates. The aim of this work is the development of behavior-based verification solutions. Our main requirement is to deploy these solutions under realistic conditions within existing environments in order to enable transparent, free-text-based continuous verification of active users with low error rates and response times.
Abstract:
We propose CIMD (Collaborative Intrusion and Malware Detection), a scheme for the realization of collaborative intrusion detection approaches. We argue that teams, i.e. detection groups with a common purpose for intrusion detection and response, improve the measures against malware. CIMD provides a collaboration model, decentralized group formation and an anonymous communication scheme. Participating agents can convey intrusion detection objectives and associated interests for collaboration partners. These interests are based on an intrusion detection ontology incorporating network and hardware configurations and detection capabilities. The anonymous communication provided by CIMD allows communication beyond suspicion, i.e. the adversary cannot perform better than guessing an IDS to be the source of a message at random. The evaluation takes place with the help of NeSSi² (www.nessi2.de), the Network Security Simulator, a dedicated environment for the analysis of attacks and countermeasures in mid-scale and large-scale networks. A CIMD prototype is being built based on the JIAC agent framework (www.jiac.de).
Abstract:
Control of traffic flow and physical distribution is considered as one of the measures for decreasing road traffic noise in a city. To implement such a measure effectively, a model for predicting traffic flow in the citywide road network is necessary. In this study, the existing model AVENUE was used as the traffic flow prediction model. The traffic flow model was integrated with a road vehicle sound power model and a sound propagation model to establish a new road traffic noise prediction model. As a case study, the prediction model was applied to the road network of Tsukuba city in Japan, and a noise map of the city was produced. To examine the calculation accuracy of the noise map, the calculated noise values at the main roads were compared with measured values. The results indicate that a high-accuracy noise map of a city can be made using the noise prediction model developed in this study.
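As a hedged sketch of the two acoustic components mentioned (source sound power plus propagation), the textbook free-field point-source formula and decibel energy summation could look like this; the actual model's source and propagation terms are certainly more elaborate (directivity, ground, barriers, traffic composition):

```python
import math

def point_source_level(lw_db, r_m):
    """Free-field sound pressure level (dB) at distance r_m (m)
    from a point source of sound power level lw_db (dB re 1 pW):
    Lp = Lw - 20*log10(r) - 11 (spherical spreading)."""
    return lw_db - 20.0 * math.log10(r_m) - 11.0

def combine_levels(levels_db):
    """Energy (incoherent) summation of several dB contributions,
    e.g. many vehicle sources contributing to one map grid point."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))
```

Doubling the number of equal sources adds 10*log10(2) ≈ 3 dB, which is why halving traffic volume yields only a modest noise reduction.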
Abstract:
The existence of the Macroscopic Fundamental Diagram (MFD), which relates network space-mean density and flow, has been shown in urban networks under homogeneous traffic conditions. Since the MFD represents area-wide network traffic performance, studies on perimeter control strategies and area traffic state estimation utilizing the MFD concept have been reported. The key requirement for a well-defined MFD is homogeneity of the area-wide traffic condition, which cannot be universally expected in the real world. For practical application of the MFD concept, several researchers have identified factors influencing network homogeneity. However, they did not explicitly take drivers' behaviour under real-time information provision into account, which has a significant impact on the shape of the MFD. This research aims to demonstrate the impact of drivers' route choice behaviour on network performance, employing the MFD as the measurement. A microscopic simulation is chosen as the experimental platform. By changing the ratio of en-route informed drivers to pre-trip informed drivers, as well as by varying route choice parameters, various scenarios are simulated in order to investigate how drivers' adaptation to traffic congestion influences network performance and the MFD shape. This study confirmed the impact of information provision on the MFD shape and highlighted the significance of route choice parameter settings as an influencing factor in MFD analysis.
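For reference, a single MFD point is conventionally computed as the length-weighted space mean of per-link flows and densities; a minimal sketch (function and variable names are ours, not from the paper):

```python
def network_mean(values, lengths):
    """Length-weighted space mean of per-link quantities:
    values  - per-link flow (veh/h) or density (veh/km)
    lengths - corresponding link lengths (km)"""
    return sum(v * l for v, l in zip(values, lengths)) / sum(lengths)
```

Sampling the pair (mean density, mean flow) at successive time intervals traces out the diagram; heterogeneous link states (here one free-flowing and one congested link) still collapse to a single averaged point, which is exactly why homogeneity matters for a well-defined MFD.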
Abstract:
The use of Wireless Sensor Networks (WSNs) for Structural Health Monitoring (SHM) has become a promising approach due to many advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data synchronization error and data loss have prevented these distinct systems from being extensively used. Recently, several SHM-oriented WSNs have been proposed and are believed to be able to overcome a large number of technical uncertainties. Nevertheless, there is limited research verifying the applicability of those WSNs with respect to demanding SHM applications like modal analysis and damage identification. This paper first presents a brief review of the most inherent uncertainties of the SHM-oriented WSN platforms and then investigates their effects on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques when employing merged data from multiple tests. The two OMA families selected for this investigation are Frequency Domain Decomposition (FDD) and Data-driven Stochastic Subspace Identification (SSI-data), since both have been widely applied in the past decade. Experimental accelerations collected by a wired sensory system on a large-scale laboratory bridge model are initially used as clean data before being contaminated by different data pollutants in a sequential manner to simulate practical SHM-oriented WSN uncertainties. The results of this study show the robustness of FDD and the precautions needed for the SSI-data family when dealing with SHM-WSN uncertainties. Finally, the use of measurement channel projection for the time-domain OMA techniques and a preferred combination of the OMA techniques to cope with the SHM-WSN uncertainties are recommended.
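The core of FDD is a singular value decomposition of the cross-power spectral density matrix at each frequency line, with structural modes appearing as peaks of the first singular value. A minimal single-segment (periodogram) sketch is shown below; a real implementation would add windowed segment averaging and careful peak picking, and the function name is our own:

```python
import numpy as np

def fdd_first_singular_values(acc, fs):
    """acc: (n_channels, n_samples) acceleration records, fs: sample rate.
    Returns (freqs, s1) where s1[i] is the largest singular value of the
    rank-one CSD estimate G(f_i) = X(f_i) X(f_i)^H."""
    X = np.fft.rfft(acc, axis=1)
    freqs = np.fft.rfftfreq(acc.shape[1], d=1.0 / fs)
    s1 = np.empty(len(freqs))
    for i in range(len(freqs)):
        G = np.outer(X[:, i], np.conj(X[:, i]))  # cross-spectral matrix at f_i
        s1[i] = np.linalg.svd(G, compute_uv=False)[0]
    return freqs, s1
```

WSN pollutants such as clock offsets between channels distort the phases of X(f) and hence the singular vectors (mode shapes), which is one way the uncertainties studied above enter the FDD pipeline.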
Abstract:
Historically, a significant gap between male and female wages has existed in the Australian labour market. Indeed, this wage differential was institutionalised in the 1912 arbitration decision, which determined that the basic female wage would be set at between 54 and 66 per cent of the male wage. More recently, however, the 1969 and 1972 Equal Pay Cases determined that male/female wage relativities should be based upon the premise of equal pay for work of equal value. It is important to note that the mere observation that average wages differ between males and females is not in itself evidence of sex discrimination. Economists restrict the definition of wage discrimination to cases where two distinct groups receive different average remuneration for reasons unrelated to differences in productivity characteristics. This paper extends previous studies of wage discrimination in Australia (Chapman and Mulvey, 1986; Haig, 1982) by correcting the estimated male/female wage differential for the existence of non-random sampling. Previous Australian estimates of male/female human-capital-based wage specifications, together with estimates of the corresponding wage differential, all suffer from a failure to address this issue. If the sample of females observed to be working is not a random sample, then estimates of the male/female wage differential will be both biased and inconsistent.
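Measuring the discrimination component discussed here typically rests on an Oaxaca-Blinder decomposition of the mean wage gap into an "explained" endowments part and an "unexplained" part; a minimal OLS sketch, without the sample-selection correction the paper adds, and with all names our own:

```python
import numpy as np

def oaxaca(X_m, y_m, X_f, y_f):
    """Oaxaca-Blinder decomposition of the mean (log-)wage gap,
    using male coefficients as the non-discriminatory reference.
    X_* include an intercept column; y_* are (log) wages."""
    b_m, *_ = np.linalg.lstsq(X_m, y_m, rcond=None)
    b_f, *_ = np.linalg.lstsq(X_f, y_f, rcond=None)
    xm, xf = X_m.mean(axis=0), X_f.mean(axis=0)
    explained = (xm - xf) @ b_m          # differences in characteristics
    unexplained = xf @ (b_m - b_f)       # differences in returns
    gap = y_m.mean() - y_f.mean()        # equals explained + unexplained
    return gap, explained, unexplained
```

The point of the paper's correction is that when working females are a non-random sample, the female OLS coefficients b_f are biased, so the "unexplained" (discrimination) share is mismeasured unless a selection term is added to the female wage equation.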
Abstract:
Effective wayfinding is the successful interplay of human and environmental factors resulting in a person moving from their current position to a desired location in a timely manner. To date, this process has not been modelled to reflect this interplay. This paper proposes a complex-systems modelling approach to wayfinding, using Bayesian Networks to model the process, and applies the model to airports. The model suggests that human factors have a greater impact on effective wayfinding in airports than environmental factors. The greatest influences on human factors are found to be the level of spatial anxiety experienced by travellers and their cognitive and spatial skills. The model also predicts that the navigation pathway a traveller must traverse has a larger impact on the effectiveness of an airport's environment in promoting effective wayfinding than the terminal design.
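The kind of two-factor Bayesian network described can be sketched by direct enumeration over a tiny graph in which human factors (H) and environmental factors (E) are independent parents of wayfinding success (S). Every probability below is a hypothetical placeholder for illustration, not a value from the paper:

```python
# Hypothetical CPTs: P(H favourable), P(E favourable), and
# P(S = success | H, E) for each parent combination.
P_H = 0.7
P_E = 0.6
P_S = {(True, True): 0.95, (True, False): 0.75,
       (False, True): 0.40, (False, False): 0.10}

def p_success():
    """Marginal P(S) by summing out both parents."""
    total = 0.0
    for h in (True, False):
        for e in (True, False):
            ph = P_H if h else 1 - P_H
            pe = P_E if e else 1 - P_E
            total += ph * pe * P_S[(h, e)]
    return total
```

In this toy table the drop in P(S) from flipping H (e.g. 0.95 to 0.40) is larger than from flipping E (0.95 to 0.75), mirroring the paper's qualitative finding that human factors dominate; the real model has many more nodes (spatial anxiety, cognitive and spatial skills, pathway, terminal design).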
Abstract:
This paper addresses the ambiguous relationship of internal (organizational) social capital and external social capital with corporate entrepreneurship performance. Drawing on social construction theory, we argue that bricolage can mitigate some of the negative effects associated with social capital by recombining and redefining the purpose of available resources. We investigated our hypotheses through a random sample of 206 corporate entrepreneurship projects. We found that neither internal nor external social capital has a direct effect on the performance of corporate entrepreneurship projects. The results indicate that bricolage mediates the relationship between social capital and the performance of corporate entrepreneurship projects. Bricolage thrives particularly when social capital internal and external to the organization is widely available. The implication is that bricolage is a critical behavior in allowing corporate entrepreneurship projects to benefit from resources available through their network of social relations inside and outside the company.
Abstract:
Debate about the relationships between business planning and performance has been active for decades (Bhidé, 2000; Mintzberg, 1994). While results have been inconclusive, this topic still strongly divides the research community (Brinckmann et al., 2010; Chwolka & Raith, 2011; Delmar & Shane, 2004; Frese, 2009; Gruber, 2007; Honig & Karlsson, 2004). Previous research explored the relationships between innovation and the venture creation process (Amason et al., 2006; Dewar & Dutton, 1986; Jennings et al., 2009). However, the relationships between business planning and innovation have mostly been invoked indirectly in the strategy and entrepreneurship literatures, through the notion of uncertainty surrounding the development of innovation. Some posited that planning may be irrelevant due to the iterative process, the numerous changes innovation development entails and the need to be flexible (Brews & Hunt, 1999). Others suggested that planning may facilitate the achievement of goals and the overcoming of obstacles (Locke and Latham, 2000), guide the venture in its allocation of resources (Delmar and Shane, 2003) and help to foster communication about the innovation being developed (Liao & Welsh, 2008). However, the nature and extent of the relationships between business planning, innovation and performance are still largely unknown. Moreover, while the reasons why ventures should engage (Frese, 2009), or not (Honig, 2004), in business planning have been investigated quite extensively (Brinckmann et al., 2010), the specific value of business planning for nascent firms developing innovation is still unclear. The objective of this paper is to shed some light on these important aspects by investigating the two following questions on a large random sample of nascent firms: 1) how is business planning used over time by new ventures developing different types and degrees of innovation? 2) how do business planning and innovation impact the performance of nascent firms?
Methods & Key propositions: This PSED-type study draws its data from the first three waves of the CAUSEE project, in which 30,105 Australian households were randomly contacted by phone using a methodology designed to capture emerging firms (Davidsson, Steffens, Gordon, & Reynolds, 2008). This screening led to the identification of 594 nascent ventures (i.e., firms that were not yet operating at the time of identification) that were willing to participate in the study. Comprehensive phone interviews were conducted with these 594 ventures, and two comprehensive follow-ups were organised 12 and 24 months later, in which 80% of the eligible cases from the previous wave completed the interview. The questionnaire contains specific sections investigating business plans, such as presence or absence, degree of formality and updates of the plan. Four types of innovation are measured along three degrees of intensity to produce a comprehensive continuous measure ranging from 0 to 12 (Dahlqvist & Wiklund, 2011). Other sections covering gestation activities, industry and different types of experience are used as controls to measure the relationships and the impacts of business planning and innovation on the performance of nascent firms over time. Results from two rounds of pre-testing informed the design of the instrument included in the main survey. The three waves of data are used first to test and compare the use of planning amongst nascent firms by their degree of innovation, and then to examine their impact on performance over time through regression analyses. Results and Implications: Three waves of data collection have been completed. Preliminary results show that, on average, innovative firms are more likely to have a business plan than their less innovative counterparts. They are also more likely to update their plan, suggesting a more continuous use of the plan over time than previously thought.
Further analyses regarding the relationships between business planning, innovation and performance are ongoing. This paper is expected to contribute to the literature on business planning and innovation by quantitatively measuring their impact on nascent firms' activities and performance at different stages of their development. In addition, this study will shed new light on the business planning-performance relationship by disentangling plans, types of nascent firms according to their degree of innovation, and their performance over time. Finally, we expect to increase understanding of the venture creation process by analysing these questions on nascent firms from a large longitudinal sample of randomly selected ventures. We acknowledge that the results from this study are preliminary and will have to be interpreted with caution, as the business planning-performance relationship is not straightforward (Brinckmann et al., 2010). Nevertheless, we believe that this study is important to the field of entrepreneurship as it provides some much-needed insight into the processes used by nascent firms during their creation and early operating stages.
Abstract:
Since 1 December 2002, the New Zealand Exchange's (NZX) continuous disclosure listing rules have operated with statutory backing. To test the effectiveness of the new corporate disclosure regime, we compare the change in the quantity of market announcements (overall, non-routine, non-procedural and external) released to the NZX before and after the introduction of statutory backing. We also extend our study by investigating whether the effectiveness of the new corporate disclosure regime is diminished or augmented by corporate governance mechanisms, including board size, separate roles for CEO and Chairman, board independence, board gender diversity and audit committee independence. Our findings provide qualified support for the effectiveness of the new corporate disclosure regime regarding the quantity of market disclosures. There is strong evidence that the effectiveness of the new regime was augmented by separate roles for CEO and Chairman, board gender diversity and audit committee independence, and diminished by board size. In addition, there is significant evidence that share price queries do impact corporate disclosure behaviour, and that this impact is significantly influenced by corporate governance mechanisms. Our findings provide important implications for corporate regulators in their quest for...
Abstract:
This paper provides a new general approach for defining coherent generators in power systems based on coherency in low-frequency inter-area modes. Instead of a single fault, the disturbance is considered to be distributed in the network by applying random load changes, a random-walk representation of real loads, and coherent generators are obtained by spectrum analysis of the generators' velocity variations. In order to find the coherent areas and their borders in the interconnected network, non-generating buses are assigned to each group of coherent generators using similar coherency detection techniques. The method is evaluated on two test systems, and coherent generators and areas are obtained for different operating points to provide a more accurate grouping approach that is valid across a range of realistic operating points of the system.
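One simple way to sketch spectral coherency grouping is to compare the FFT phase of each generator's speed signal at a chosen inter-area mode frequency: generators swinging roughly in phase at that mode form one coherent group. The threshold, names and single-frequency simplification below are our illustration, not the paper's method:

```python
import numpy as np

def group_by_phase(speeds, fs, f_mode, tol_deg=30.0):
    """Group generator speed signals (rows of `speeds`) by their FFT
    phase at the inter-area mode frequency f_mode (Hz). A generator
    joins the first group whose seed member is within tol_deg."""
    n = speeds.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_mode)))       # nearest bin
    phases = np.degrees(np.angle(np.fft.rfft(speeds, axis=1)[:, k]))
    groups = []
    for i, ph in enumerate(phases):
        for g in groups:
            diff = abs((ph - phases[g[0]] + 180.0) % 360.0 - 180.0)
            if diff <= tol_deg:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups
```

Generators oscillating in anti-phase at the mode (a 180° difference) land in separate groups, which is the defining signature of two coherent areas swinging against each other.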
Abstract:
This work experimentally examines the performance benefits of a regional CORS network for GPS orbit and clock solutions supporting real-time Precise Point Positioning (PPP). The regionally enhanced GPS precise orbit solutions are derived from a global, evenly distributed CORS network augmented with a densely distributed network in Australia and New Zealand. A series of computational schemes for different network configurations are adopted in the GAMIT-GLOBK and PANDA data processing. The precise GPS orbit results show that the regionally enhanced solutions achieve overall orbit improvements with respect to the solutions derived from the global network only. Additionally, the orbital differences over GPS satellite arcs that are visible from any of the five Australia-wide CORS stations show a higher percentage of overall improvement compared to the satellite arcs that are not visible from these stations. The regional GPS clock and Uncalibrated Phase Delay (UPD) products are derived using the PANDA real-time processing module from Australian CORS networks of 35 and 79 stations respectively. Analysis of PANDA kinematic PPP and kinematic PPP-AR solutions shows certain overall improvements in positioning performance from a denser network configuration after solution convergence. However, the clock and UPD enhancement of kinematic PPP solutions is marginal. It is suggested that other factors, such as ionospheric effects and incorrectly fixed ambiguities, may be more dominant and deserve further research attention.
Abstract:
Strike-slip faults commonly display structurally complex areas of positive or negative topography. Understanding the development of such areas has important implications for earthquake studies and hydrocarbon exploration. Previous workers identified the key factors controlling the occurrence of both topographic modes and the related structural styles. Kinematic and stress boundary conditions are of first-order relevance. Surface mass transport and material properties affect fault network structure. Experiments demonstrate that dilatancy can generate positive topography even under simple-shear boundary conditions. Here, we use physical models with sand to show that the degree of compaction of the deformed rocks alone can determine the type of topography and related surface fault network structure in simple-shear settings. In our experiments, volume changes of ∼5% are sufficient to generate localized uplift or subsidence. We discuss scalability of model volume changes and fault network structure and show that our model fault zones satisfy geometrical similarity with natural flower structures. Our results imply that compaction may be an important factor in the development of topography and fault network structure along strike-slip faults in sedimentary basins.