194 results for sicurezza, exploit, XSS, Beef, browser
Abstract:
Diversity techniques have long been used to combat channel fading in wireless communication systems. Recently, cooperative communication has attracted a lot of attention due to the many benefits it offers. Cooperative routing protocols with diversity transmission can thus be developed to exploit the random nature of wireless channels, improving network efficiency by selecting multiple cooperative nodes to forward data. In this paper we analyze and evaluate the performance of a novel routing protocol in which multiple cooperative nodes share multiple channels. The multiple shared channels cooperative (MSCC) routing protocol achieves a diversity advantage by using cooperative transmission. It unites a clustering hierarchy with a bandwidth reuse scheme to mitigate co-channel interference. A theoretical analysis of the average packet reception rate and network throughput of the MSCC protocol is presented and compared with simulated results.
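The diversity advantage from multiple cooperating relays can be illustrated with the standard independent-link argument (a generic sketch, not the MSCC analysis itself): a packet is lost only if every relay link fails, so the reception rate grows quickly with the number of relays.

```python
def reception_rate(p_fail: float, n_relays: int) -> float:
    """Illustrative diversity gain: if each of n independent relay links
    drops a packet with probability p_fail, the packet is lost only when
    all links fail simultaneously, giving a reception rate of 1 - p_fail**n."""
    return 1.0 - p_fail ** n_relays
```

Under this independence assumption, a 10% per-link loss rate and three cooperating relays raise the reception rate from 90% to 99.9%.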
Abstract:
A fundamental principle of the resource-based view (RBV) of the firm is that the basis for competitive advantage lies primarily in the application of bundles of valuable strategic capabilities and resources at a firm's or supply chain's disposal. These capabilities enact research activities and outputs produced by industry-funded R&D bodies. Such industry-led innovations are seen as strategic industry resources, because effective utilization of industry innovation capacity by sectors such as the Australian beef industry is critical if productivity levels are to increase. Academics and practitioners often maintain that dynamic supply chains and innovation capacity are the mechanisms most likely to deliver performance improvements in national industries. Yet many industries are still failing to capitalise on these strategic resources. In this research, we draw on the resource-based view (RBV) and embryonic research into strategic supply chain capabilities. We investigate how two strategic supply chain capabilities (supply chain performance differential capability and supply chain dynamic capability) influence industry-led innovation capacity utilization and provide superior performance enhancements to the supply chain. In addition, we examine the influence of the size of the supply chain operative as a control variable. Results indicate that both small and large supply chain operatives in this industry believe these strategic capabilities influence, and function as second-order latent variables of, this strategic supply chain resource. Additionally, respondents acknowledge that size impacts both the amount of influence these strategic capabilities have and the level of performance enhancement supply chain operatives expect from utilizing industry-led innovation capacity. However, results also indicate a contradiction, both within this industry and in relation to the existing literature, when it comes to utilizing such e-resources.
Abstract:
For more than a decade, research in the field of context-aware computing has aimed to find ways to exploit situational information that can be detected by mobile computing and sensor technologies. The goal is to provide people with new and improved applications, enhanced functionality and a better user experience (Dey, 2001). Early applications focused on representing or computing on physical parameters, such as showing your location and the location of people or things around you. Such applications might show where the next bus is, which of your friends are in the vicinity, and so on. With the advent of social networking software, microblogging sites such as Facebook and Twitter, recommender systems and so on, context-aware computing is moving towards mining the social web in order to provide better representations and understanding of context, including social context. In this paper we begin by recapping different theoretical framings of context. We then discuss the problem of context-aware computing from a design perspective.
Abstract:
Accurate and detailed road models play an important role in a number of geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance systems. In this thesis, an integrated approach for the automatic extraction of precise road features from high resolution aerial images and LiDAR point clouds is presented. A framework of road information modeling has been proposed, for rural and urban scenarios respectively, and an integrated system has been developed to deal with road feature extraction using image and LiDAR analysis. For road extraction in rural regions, a hierarchical image analysis is first performed to maximize the exploitation of road characteristics at different resolutions. The rough locations and directions of roads are provided by the road centerlines detected in low resolution images, both of which can be further employed to facilitate road information generation in high resolution images. The histogram thresholding method is then chosen to classify road details in high resolution images, where color space transformation is used for data preparation. After the road surface detection, anisotropic Gaussian and Gabor filters are employed to enhance road pavement markings while suppressing other ground objects, such as vegetation and houses. Afterwards, pavement markings are obtained from the filtered image using Otsu's clustering method. The final road model is generated by superimposing the lane markings on the road surfaces, where the digital terrain model (DTM) produced by LiDAR data can also be combined to obtain the 3D road model. As the extraction of roads in urban areas is greatly affected by buildings, shadows, vehicles, and parking lots, we combine high resolution aerial images and dense LiDAR data to fully exploit the precise spectral and horizontal spatial resolution of aerial images and the accurate vertical information provided by airborne LiDAR.
Object-oriented image analysis methods are employed to perform feature classification and road detection in aerial images. In this process, we first utilize an adaptive mean shift (MS) segmentation algorithm to segment the original images into meaningful object-oriented clusters. The support vector machine (SVM) algorithm is then applied to the MS-segmented image to extract road objects. The road surface detected in LiDAR intensity images is taken as a mask to remove the effects of shadows and trees. In addition, the normalized DSM (nDSM) obtained from LiDAR is employed to filter out other above-ground objects, such as buildings and vehicles. The proposed road extraction approaches are tested using rural and urban datasets respectively. The rural road extraction method is evaluated using pan-sharpened aerial images of the Bruce Highway, Gympie, Queensland. The road extraction algorithm for urban regions is tested using the Bundaberg datasets, which combine aerial imagery and LiDAR data. A quantitative evaluation of the extracted road information for both datasets has been carried out. The experiments and evaluation results using the Gympie datasets show that more than 96% of the road surfaces and over 90% of the lane markings are accurately reconstructed, and the false alarm rates for road surfaces and lane markings are below 3% and 2% respectively. For the urban test sites of Bundaberg, more than 93% of the road surface is correctly reconstructed, and the mis-detection rate is below 10%.
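The pipeline above names Otsu's clustering method for extracting pavement markings; a minimal NumPy sketch of Otsu's threshold selection (illustrative only, not the thesis's implementation) looks like this:

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustive Otsu: pick the intensity threshold that maximizes the
    between-class variance of the two resulting pixel populations."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, 0.0
    w0 = 0.0    # cumulative pixel count of the background class
    sum0 = 0.0  # cumulative intensity sum of the background class
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                 # background mean
        m1 = (sum_all - sum0) / w1     # foreground mean
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels brighter than the returned threshold would be classified as markings in a filtered image with a bright-marking, dark-pavement contrast.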
Abstract:
Establishing a persistent presence in the ocean with an AUV to observe the temporal variability of large-scale ocean processes requires a unique sensor platform. In this paper, we propose a strategy that utilizes ocean model predictions to increase the autonomy and control of Lagrangian or profiling floats for precisely this purpose. An A* planner is applied to a local controllability map, generated from predictions of ocean currents, to compute a path between prescribed waypoints that has the highest likelihood of successful execution. The control to follow the planned path is computed by a model predictive controller, designed to select the best depth at which the vehicle can exploit ambient currents to reach the goal waypoint. Mission constraints are employed to simulate a practical data collection mission. Results are presented in simulation for a mission off the coast of Los Angeles, CA, USA, and demonstrate a surprising ability of a Lagrangian float to reach a desired location.
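As a generic illustration of the planning step, here is a minimal A* search over a 2-D cost grid, a stand-in for the local controllability map (the grid layout, costs, and Manhattan heuristic are hypothetical, not the authors' planner):

```python
import heapq

def astar(cost, start, goal):
    """A* over a 2-D grid; cost[r][c] is the cost of entering a cell
    (e.g. inverse likelihood of successful traversal); None marks
    cells the vehicle cannot reach."""
    rows, cols = len(cost), len(cost[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible for unit costs
    frontier = [(h(start), 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue
        best_g[node] = g
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and cost[nr][nc] is not None:
                ng = g + cost[nr][nc]
                heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable from start
```

In the paper's setting the returned waypoint sequence would then be handed to the model predictive controller, which chooses drift depths to follow it.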
Abstract:
In Australian Meat Holdings Pty Ltd v Sayers [2007] QSC 390 Daubney J considered the obligation imposed on a claimant under s 275 of the Workers’ Compensation and Rehabilitation Act 2003 (Qld) to provide the insurer with an authority to obtain information and documents. The decision has practical implications.
Abstract:
The notion of territorial strategy emerged in the 1990s and has become more and more popular since. It refers to that combination of factors purposely assembled by governments, private and public companies, universities, and industrial associations to exploit a specific geographic competitive advantage in order to boost economic growth through the development of entrepreneurial activity and innovation. Three factors are generally considered to be the building blocks of a territorial strategy: natural resources, human capital, and industrial capabilities. Natural resources derive from environmental conditions and represent raw materials or land available in a region. The presence of natural resources characterizes the typology of an industry (related to tourism, oil, wood, fish, and so forth) that exists or could exist in a certain area. Human capital refers to the stock of competences available in a certain region resulting from education and work experience. Industrial capabilities relate to complex constructs of specialized expertise, the confidence to apply knowledge and skills in various contexts and under changing conditions, and an ability repeatedly to improve methods and processes in a specific industry.
Abstract:
Security cues found in web browsers are meant to alert users to potential online threats, yet many studies demonstrate that security indicators are largely ineffective in this regard. Those studies have depended upon subjects' self-reported usage or on aggregate experiments that correlate responses to sites with and without indicators. We report on a laboratory experiment using eye-tracking to follow the behavior of self-identified computer experts as they share information across popular social media websites. The use of eye-tracking equipment allows us to explore possible behavioral differences in the way experts perceive web browser security cues, as opposed to non-experts. Unfortunately, due to the use of self-identified experts, technological issues with the setup, and demographic anomalies, our results are inconclusive. We describe our initial experimental design and the lessons learned in our experimentation, and provide a set of steps for others to follow when implementing experiments using unfamiliar technologies (eye-tracking specifically), subjects with differing experience with the laboratory tasks, and individuals with varying security expertise. We also discuss recruitment and how our design will address the inherent uncertainties in recruitment, as opposed to designing for an ideal population. Some of these modifications are generalizable; together they will allow us to run a larger 2x2 study, rather than a study of only experts using two different single sign-on systems.
Abstract:
Traditionally, Science education has stressed the importance of teaching students to conduct ‘scientific inquiry’, with the main focus being the experimental model of inquiry used by real world scientists. Current educational approaches using constructivist pedagogy recognise the value of inquiry as a method for promoting the development of deep understanding of discipline content. A recent Information Learning Activity undertaken by a Grade Eight Science class was observed to discover how inquiry-based learning is implemented in contemporary Science education. By analysing student responses to questionnaires and assessment task outcomes, the author was able to determine the level of inquiry inherent in the activity and how well the model supported student learning and the development of students’ information literacy skills. Although students achieved well overall, some recommendations are offered that may enable teachers to better exploit the learning opportunities provided by inquiry-based learning. Planning interventions at key stages of the inquiry process can assist students to learn more effective strategies for dealing with cognitive and affective challenges. Allowing students greater input into the selection of topic or focus of the activity may encourage students to engage more deeply with the learning task. Students are likely to experience greater learning benefit from access to developmentally appropriate resources, increased time to explore topics and multiple opportunities to undertake information searches throughout the learning activity. Finally, increasing the cognitive challenge can enhance both the depth of students’ learning and their information literacy skills.
Abstract:
Vehicular Ad-hoc Networks (VANETs) have different characteristics from other mobile ad-hoc networks. Vehicles, which act as both routers and clients, are highly dynamic and connected by unreliable radio links, so routing becomes a complex problem. First, we propose CO-GPSR (Cooperative GPSR), an extension of traditional GPSR (Greedy Perimeter Stateless Routing) that uses relay nodes to exploit radio path diversity in a vehicular network and increase routing performance. Next, we formulate a multi-objective decision-making problem to select optimum packet-relaying nodes and increase routing performance further, using cross-layer information in the optimization process. We evaluate routing performance comprehensively using realistic vehicular traces and a Nakagami fading propagation model optimized for highway scenarios in VANETs. Our results show that when multi-objective decision making is used for cross-layer optimization of routing, a 70% performance increment can be obtained on average at low vehicle densities, a two-fold increase compared to the single-criterion maximization approach.
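One common way to collapse a multi-objective relay choice into a single ranking is weighted-sum scalarization of the cross-layer metrics. The sketch below is a generic illustration of that idea (the metric names and weights are hypothetical; the paper's actual formulation may differ):

```python
def select_relay(candidates, weights):
    """Pick the candidate relay that maximizes a weighted sum of
    normalized cross-layer metrics (each scaled to [0, 1], higher is
    better), e.g. link quality from the PHY layer and geographic
    progress toward the destination from the routing layer."""
    score = lambda c: sum(w * c[m] for m, w in weights.items())
    return max(candidates, key=score)
```

For example, a relay with mediocre link quality but large geographic progress can outrank a well-connected neighbour that barely advances the packet, depending on the weights chosen.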
Abstract:
Although the drivers of innovation have been studied extensively in construction, greater attention is required to how innovation diffusion can be effectively assessed within this complex and interdependent project-based industry. The authors draw on a highly cited innovation diffusion model by Rogers (2006) and develop a tailored conceptual framework to guide future empirical work aimed at assessing innovation diffusion in construction. The conceptual framework developed and discussed in this paper supports a five-stage process model of innovation diffusion, namely: 1) knowledge and idea generation, 2) persuasion and evaluation, 3) decision to adopt, 4) integration and implementation, and 5) confirmation. As its theoretical contribution, this paper proposes three critical measurement constructs which can be used to assess the effectiveness of the diffusion process: 1) the nature and introduction of an innovative idea, 2) organizational capacity to acquire, assimilate, transform and exploit an innovation, and 3) rates of innovation facilitation and adoption. The constructs are interpreted in the project-based context of the construction industry, extending the contribution of general management theorists. Research planned by the authors will test the validity and reliability of the constructs developed in this paper.
Abstract:
Modern applications comprise multiple components, such as browser plug-ins, often of unknown provenance and quality. Statistics show that failure of such components accounts for a high percentage of software faults. Enabling isolation of such fine-grained components is therefore necessary to increase the robustness and resilience of security-critical and safety-critical computer systems. In this paper, we evaluate whether such fine-grained components can be sandboxed through the use of the hardware virtualization support available in modern Intel and AMD processors. We compare the performance and functionality of such an approach to two previous software-based approaches. The results demonstrate that hardware isolation minimizes the difficulties encountered with software-based approaches, while also reducing the size of the trusted computing base, thus increasing confidence in the solution's correctness. We also show that our relatively simple implementation has equivalent run-time performance, with overheads of less than 34%, does not require custom tool chains, and provides enhanced functionality over software-only approaches, confirming that hardware virtualization technology is a viable mechanism for fine-grained component isolation.
Abstract:
Security indicators in web browsers alert users to the presence of a secure connection between their computer and a web server; many studies have shown that such indicators are largely ignored by users in general. In other areas of computer security, research has shown that technical expertise can decrease user susceptibility to attacks. In this work, we examine whether computer or security expertise affects use of web browser security indicators. Our study takes place in the context of web-based single sign-on, in which a user can use credentials from a single identity provider to log in to many relying websites; single sign-on is a more complex, and hence more difficult, security task for users. In our study, we used eye trackers and surveyed participants to examine the cues individuals use and those they report using, respectively. Our results show that users with security expertise are more likely to self-report looking at security indicators, and eye-tracking data shows they have longer gaze duration at security indicators than those without security expertise. However, computer expertise alone is not correlated with recorded use of security indicators. In survey questions, neither experts nor novices demonstrate a good understanding of the security consequences of web-based single sign-on.
Abstract:
Automated airborne collision-detection systems are a key enabling technology for facilitating the integration of unmanned aerial vehicles (UAVs) into the national airspace. These safety-critical systems must be sensitive enough to provide timely warnings of genuine airborne collision threats, but not so sensitive as to cause excessive false alarms. Hence, an accurate characterisation of detection and false-alarm sensitivity is essential for understanding performance trade-offs, and system designers can exploit this characterisation to help achieve a desired balance in system performance. In this paper we experimentally evaluate a sky-region, image-based, aircraft collision detection system that is based on morphological and temporal processing techniques. (Note that the examined detection approaches are not suitable for the detection of potential collision threats against a ground clutter background.) A novel methodology for collecting realistic airborne collision-course target footage in both head-on and tail-chase engagement geometries is described. Under (hazy) blue sky conditions, our proposed system achieved detection ranges greater than 1540 m in 3 flight test cases with no false-alarm events in 14.14 hours of non-target data (under cloudy conditions, the system achieved detection ranges greater than 1170 m in 4 flight test cases with no false-alarm events in 6.63 hours of non-target data). Importantly, this paper is the first documented presentation of detection-range versus false-alarm curves generated from airborne target and non-target image data.
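Morphological small-target filtering of the kind referred to above is often implemented as a close-minus-open (CMO) filter, which cancels the slowly varying sky background while preserving small bright or dark spots. The following NumPy sketch is a generic illustration under that assumption, not the paper's exact pipeline:

```python
import numpy as np

def _morph(img, size, op):
    """Naive grayscale morphology: apply op (np.max for dilation,
    np.min for erosion) over a size x size window at every pixel."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = op(padded[r:r + size, c:c + size])
    return out

def close_minus_open(frame, size=3):
    """Close-minus-open (CMO): (dilate then erode) minus (erode then
    dilate). Small spots survive; smooth background cancels to ~0."""
    closing = _morph(_morph(frame, size, np.max), size, np.min)
    opening = _morph(_morph(frame, size, np.min), size, np.max)
    return closing - opening
```

The temporal-processing stage would then accumulate these per-frame responses along candidate tracks to separate persistent targets from transient noise.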
Abstract:
BACKGROUND: Effective management of chronic diseases such as prostate cancer is important. Research suggests a tendency among prostate cancer patients to use self-care treatment options such as over-the-counter (OTC) complementary medications. The current trend of patient-driven recording of health data in an online Personal Health Record (PHR) presents an opportunity to develop new data-driven approaches for improving prostate cancer patient care. However, the ability of current online solutions to share patients' data for better decision support is limited. An informatics approach may improve the online sharing of self-care interventions among these patients and provide better evidence to support decisions made during their self-managed care. AIMS: To identify requirements for an online system and to describe a new case-based reasoning (CBR) method for improving self-care of advanced prostate cancer patients in an online PHR environment. METHOD: A non-identifying online survey was conducted to understand self-care patterns among prostate cancer patients and to identify requirements for an online information system. The pilot study was carried out between August 2010 and December 2010. A case base of 52 patients was developed. RESULTS: The data analysis revealed self-care patterns among the prostate cancer patients. Selenium (55%) was the most common complementary supplement used by the patients, and paracetamol (about 45%) was the most commonly used OTC medication. CONCLUSION: The results of this study specified requirements for an online case-based reasoning information system. The outcomes of this study are being incorporated in the design of the proposed artificial intelligence (AI) driven patient journey browser system. A basic version of the proposed system is currently being considered for implementation.
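The retrieval step of a case-based reasoning system can be sketched as a simple nearest-neighbour match over patient features. This is a generic illustration assuming dictionary-encoded cases (the feature names are hypothetical), not the system described in the abstract:

```python
def retrieve(case_base, query, k=3):
    """Nearest-neighbour retrieval, the first step of the classic CBR
    cycle (retrieve, reuse, revise, retain): return the k stored cases
    whose features best match the query patient profile."""
    def similarity(case_features, query_features):
        shared = set(case_features) & set(query_features)
        if not shared:
            return 0.0
        # fraction of shared features with identical values
        return sum(case_features[f] == query_features[f] for f in shared) / len(shared)
    return sorted(case_base,
                  key=lambda c: similarity(c["features"], query),
                  reverse=True)[:k]
```

The self-care interventions recorded in the retrieved cases could then be surfaced to the patient or clinician as evidence from similar past journeys.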