121 results for PACKET MARKING
Abstract:
Early in the practice-led research debate, Steven Scrivener (2000, 2002) identified some general differences in the approach of artists and designers undertaking postgraduate research. His distinctions centered on the role of the artefact in problem-based research (associated with design) and creative-production research (associated with artistic practice). Nonetheless, in broader discussions on practice-led research, 'art and design' often continues to be conflated within a single term. In particular, marked differences between art and design methodologies, theoretical framing, research goals and research claims have been underestimated. This paper revisits Scrivener's work and establishes further distinctions between art and design research. It is informed by our own experiences of postgraduate supervision and research methods training, and an empirical study of over sixty postgraduate, practice-led projects completed at the Creative Industries Faculty of QUT between 2002 and 2008. Our reflections have led us to propose that artists and designers work with differing research goals (the evocative and the effective, respectively), which are played out in the questions asked, the creative process, the role of the artefact and the way new knowledge is evidenced. Of course, research projects will have their own idiosyncrasies but, we argue, marking out the poles at each end of the spectrum of art and design provides useful insights for postgraduate candidates, supervisors and methodologists alike.
Abstract:
The automatic extraction of road features from remotely sensed images has been a topic of great interest within the photogrammetric and remote sensing communities for over three decades. Although various techniques have been reported in the literature, it remains challenging to extract road details efficiently as image resolution increases and the demand for accurate, up-to-date road data grows. In this paper, we focus on the automatic detection of road lane markings, which are crucial for many applications, including lane-level navigation and lane departure warning. The approach consists of four steps: i) data preprocessing, ii) image segmentation and road surface detection, iii) road lane marking extraction from the detected road surface, and iv) testing and system evaluation. The proposed approach utilizes the unsupervised ISODATA image segmentation algorithm, which separates the image into vegetation regions and road surface based only on the Cb component of the YCbCr color space. A shadow detection method based on the YCbCr color space is also employed to detect and recover the shadows cast on the road surface by vehicles and trees. Finally, the lane marking features are detected from the road surface using histogram clustering. Experiments applying the proposed method to an aerial imagery dataset of Gympie, Queensland demonstrate the efficiency of the approach.
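To make the colour-space step above concrete, the sketch below isolates the Cb channel of the YCbCr representation and clusters it into two classes to approximate road-surface detection, then thresholds the bright pixels inside that surface as candidate lane markings. OpenCV has no ISODATA routine, so k-means stands in for it here; the file name, the two-cluster assumption and the assumption that the road corresponds to the lower-Cb cluster are all hypothetical.

```python
# Illustrative sketch only: Cb-channel road-surface segmentation.
# k-means stands in for the unsupervised ISODATA algorithm described in the
# abstract; "aerial.png" and the two-cluster assumption are hypothetical.
import cv2
import numpy as np

img = cv2.imread("aerial.png")                      # BGR image
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)      # OpenCV orders channels Y, Cr, Cb
cb = ycrcb[:, :, 2].astype(np.float32)              # Cb component only

# Cluster Cb values into two classes (e.g. vegetation vs. road surface).
samples = cb.reshape(-1, 1)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(samples, 2, None, criteria, 5,
                                cv2.KMEANS_PP_CENTERS)
road_mask = (labels.reshape(cb.shape) ==
             np.argmin(centers)).astype(np.uint8) * 255  # assume road = lower-Cb cluster

# Within the road mask, bright pixels (high luma) suggest lane markings;
# a histogram-based threshold such as Otsu picks out the bright mode.
y = ycrcb[:, :, 0]
_, markings = cv2.threshold(cv2.bitwise_and(y, y, mask=road_mask),
                            0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("lane_markings.png", markings)
```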
Abstract:
While the need for teamwork skills consistently appears in job advertisements across all sectors, the development of these skills remains, for many university students (and some academic staff), one of the most painful and most complained-about experiences. This presentation introduces the final phase of a project that has investigated and analysed the design of teamwork assessment across all discipline areas in order to provide a university-wide protocol for this important graduate capability. The protocol consolidates best-practice guidelines and resources across a range of approaches to team assessment and includes an online diagnostic tool for evaluating the quality of assessment design. Guidelines are provided for all aspects of the design process, such as developing real-world relevance, choosing the ideal team structure, planning for intervention and conflict resolution, and selecting appropriate marking options. While still allowing academic staff to exercise creativity in assessment design, the guidelines increase the possibility of students experiencing a consistent and explicit approach to teamwork throughout their course. If implementation of the protocol is successful, the project team predicts that the resulting consistency and explicitness in approaches to teamwork will lead to more coherent skill development across units, more realistic expectations for students and staff, and better communication between all those participating in the process.
Abstract:
The wavelet packet transform decomposes a signal into a set of bases for time–frequency analysis. This decomposition creates an opportunity for implementing distributed data mining, where features extracted from different wavelet packet bases serve as feature vectors for applications. This paper presents a novel approach to integrated machine fault diagnosis based on localised wavelet packet bases of vibration signals. The best basis is first determined according to its classification capability. Data mining is then applied to extract features, and local decisions are drawn using Bayesian inference. A final conclusion is reached using a weighted-average method in data fusion. A case study on rolling element bearing diagnosis shows that this approach can greatly improve the accuracy of diagnosis.
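As a simple illustration of extracting features from wavelet packet bases, the sketch below decomposes a synthetic vibration record with PyWavelets and uses the energy of each level-3 basis as a feature vector; the wavelet family, decomposition level and signal are assumptions rather than the paper's settings.

```python
# Illustrative sketch only: per-node energy features from a wavelet packet
# decomposition of a vibration signal. The wavelet family, level and the
# synthetic signal below are assumptions, not the paper's configuration.
import numpy as np
import pywt

fs = 12_000                                   # hypothetical sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 157 * t) + 0.3 * np.random.randn(t.size)  # stand-in vibration record

wp = pywt.WaveletPacket(data=signal, wavelet="db4", mode="symmetric", maxlevel=3)
nodes = wp.get_level(3, order="freq")         # 2**3 = 8 frequency-ordered bases

# Energy of each wavelet packet basis; such vectors could feed a classifier
# (e.g. a naive Bayes model) for local fault decisions.
features = np.array([np.sum(np.square(node.data)) for node in nodes])
features /= features.sum()                    # normalised energy distribution
print(dict(zip([node.path for node in nodes], features.round(3))))
```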
Abstract:
Rapid advancements in the field of genetic science have engendered considerable debate, speculation, misinformation and legislative action worldwide. While programs such as the Human Genome Project bring the prospect of seemingly miraculous medical advancements within imminent reach, they also create the potential for significant invasions of traditional areas of privacy and human dignity by laying the potential foundation for new forms of discrimination in insurance, employment and immigration regulation. The insurance industry, which has, of course, traditionally been premised on discrimination as part of its underwriting process, is proving to be the frontline of this regulatory battle, with extensive legislation, guidelines and debate marking its progress.
Abstract:
Monitoring Internet traffic is critical for acquiring a good understanding of threats to computer and network security and for designing efficient computer security systems. Researchers and network administrators have applied several approaches to monitoring traffic for malicious content. These techniques include monitoring network components, aggregating IDS alerts, and monitoring unused IP address spaces. Another method for monitoring and analyzing malicious traffic, which has been widely tried and accepted, is the use of honeypots. Honeypots are very valuable security resources for gathering artefacts associated with a variety of Internet attack activities. As honeypots run no production services, any contact with them is considered potentially malicious or suspicious by definition. This unique characteristic reduces the amount of collected traffic and makes honeypots a more valuable source of information than other existing techniques. Currently, there is insufficient research in the field of honeypot data analysis. To date, most of the work on honeypots has been devoted to designing new honeypots or optimizing existing ones. Approaches for analyzing data collected from honeypots, especially low-interaction honeypots, are presently immature, while analysis techniques are manual and focus mainly on identifying existing attacks. This research addresses the need for more advanced techniques for analyzing Internet traffic data collected from low-interaction honeypots. We believe that characterizing honeypot traffic will improve the security of networks and, if the honeypot data is analysed in a timely manner, give early signs of new vulnerabilities or outbreaks of new automated malicious code, such as worms. The outcomes of this research include:
• Identification of repeated use of attack tools and attack processes through grouping activities that exhibit similar packet inter-arrival time distributions using the cliquing algorithm;
• Application of principal component analysis to detect the structure of attackers' activities present in low-interaction honeypots and to visualize attackers' behaviors;
• Detection of new attacks in low-interaction honeypot traffic through the use of the principal component's residual space and the square prediction error statistic;
• Real-time detection of new attacks using recursive principal component analysis;
• A proof-of-concept implementation for honeypot traffic analysis and real-time monitoring.
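A minimal sketch of the residual-space idea listed above: traffic feature vectors are projected onto a retained principal subspace, and the squared prediction error (SPE) of the residual flags observations the model cannot explain. The synthetic features, component count and threshold rule are assumptions.

```python
# Illustrative sketch only: flagging anomalous traffic records via the PCA
# residual space and the squared prediction error (SPE / Q statistic).
# The synthetic feature matrix, number of components and threshold rule
# are assumptions for demonstration.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                # stand-in honeypot traffic features
X[-5:] += 6.0                                 # a few injected "new attack" records

mean = X.mean(axis=0)
pca = PCA(n_components=3).fit(X - mean)       # retained principal subspace
P = pca.components_.T                         # 10 x 3 loading matrix

residual = (X - mean) - (X - mean) @ P @ P.T  # part not explained by the model
spe = np.sum(residual ** 2, axis=1)           # squared prediction error per record

threshold = np.percentile(spe[:-5], 99)       # simple empirical control limit
print("flagged records:", np.where(spe > threshold)[0])
```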
Abstract:
De Certeau (1984) constructs the notion of belonging as a sentiment that develops over time through everyday activities. He explains that simple everyday activities are part of the process of appropriation and territorialisation, and suggests that over time belonging and attachment are established and built on memory, knowledge and the experiences of everyday activities. Following de Certeau, non-Indigenous Australians have developed attachment and belonging to places based on the dispossession of Aboriginal people and on their own everyday practices over the past two hundred years. During this time non-Indigenous people have marked their appropriation and territorialisation with signs, symbols, representations and images. In marking their attachment, they also define how they position Australia's Indigenous people, through both our presence and our absence. This paper will explore signs and symbols within spaces and places in health services and showcase how they reflect the historical, political, cultural, social and economic values, and power relations, of broader society. It will draw on the voices of Aboriginal women to demonstrate their everyday experiences of such sites. It will conclude by highlighting how Aboriginal people assert their identities and unceded sovereignty within such health sites and actively resist ongoing white epistemological notions of us and the logic of patriarchal white sovereignty.
Abstract:
TCP is the dominant protocol for reliable communication over the Internet. It provides flow, congestion and error control mechanisms designed for reliable wired networks. Its congestion control mechanism is not suitable for wireless links, where data corruption and loss rates are higher. The physical links are transparent to TCP, which attributes every packet loss to congestion and responds by invoking its congestion handling mechanisms and reducing the transmission rate. This wastes the already limited bandwidth available on wireless links. Consequently, there is little point in researching ways to increase the bandwidth of wireless links while the bandwidth already available is not optimally utilized. This paper proposes a hybrid scheme called TCP Detection and Recovery (TCP-DR) that distinguishes congestion-, corruption- and mobility-related losses and then instructs the sending host to take the appropriate action. As a result, link utilization remains near optimal when losses are due to either a high bit error rate or mobility.
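The sketch below is only a hypothetical illustration of loss-differentiated sender behaviour, not the TCP-DR algorithm itself: it assumes some classifier (e.g. cross-layer feedback) has already labelled a loss as congestion, corruption or mobility related, and shows how the reactions could differ.

```python
# Illustrative sketch only: how a sender might react differently to
# congestion, corruption and mobility losses. This is NOT the paper's
# TCP-DR algorithm; the loss classifier is assumed to exist (e.g. via
# link-layer or cross-layer feedback).
from enum import Enum, auto


class LossCause(Enum):
    CONGESTION = auto()
    CORRUPTION = auto()   # high bit error rate on the wireless hop
    MOBILITY = auto()     # temporary disconnection during handoff


def react_to_loss(cause: LossCause, cwnd: float, ssthresh: float):
    """Return (cwnd, ssthresh) after a loss, differentiated by its cause."""
    if cause is LossCause.CONGESTION:
        # Standard TCP behaviour: back off to relieve the bottleneck.
        ssthresh = max(cwnd / 2, 2)
        cwnd = ssthresh
    elif cause is LossCause.CORRUPTION:
        # Loss not caused by congestion: retransmit without shrinking cwnd,
        # so the scarce wireless bandwidth keeps being used.
        pass
    elif cause is LossCause.MOBILITY:
        # Keep the window unchanged and retry once the handoff completes.
        pass
    return cwnd, ssthresh


print(react_to_loss(LossCause.CORRUPTION, cwnd=32.0, ssthresh=64.0))
```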
Abstract:
GMPLS is a generalized form of MPLS (Multiprotocol Label Switching). MPLS is IP-packet based and uses MPLS-TE for packet traffic engineering. GMPLS extends MPLS capabilities: it provides separation between the transmission, control and management planes and network management. The control plane enables applications such as traffic engineering, service provisioning and differentiated services. The GMPLS control plane architecture includes signaling (RSVP-TE, CR-LDP) and routing (OSPF-TE, ISIS-TE) protocols. This paper provides an overview of the signaling protocols, describes their main functionalities, and presents a general evaluation of both protocols.
Abstract:
This paper introduces an energy-efficient Rate Adaptive MAC (RA-MAC) protocol for long-lived Wireless Sensor Networks (WSNs). Previous research shows that the dynamic and lossy nature of wireless communication is one of the major challenges to reliable data delivery in a WSN. RA-MAC achieves high link reliability in such situations by dynamically trading off radio bit rate for signal processing gain. This extra gain reduces the packet loss rate, which lowers energy expenditure by reducing the number of retransmissions. RA-MAC selects the optimal data rate based on channel conditions with the aim of minimizing energy consumption. We have implemented RA-MAC in TinyOS on an off-the-shelf sensor platform (TinyNode), and evaluated its performance experimentally against a state-of-the-art WSN MAC protocol (SCP-MAC).
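As a toy model of the rate/energy trade-off described above (not RA-MAC's actual decision rule), the sketch below picks the bit rate that minimises expected energy per delivered packet, where slower rates enjoy a higher packet reception ratio; all numbers are hypothetical.

```python
# Illustrative sketch only: choosing a radio bit rate to minimise expected
# energy per successfully delivered packet. The rates, power draw and
# per-rate packet reception ratios (PRR) below are made-up numbers, not a
# measured RA-MAC model.
PACKET_BITS = 1024

# rate (kbps) -> (radio power in mW, packet reception ratio under current channel)
channel_model = {
    250: (60.0, 0.62),    # fast but fragile at this link quality
    125: (55.0, 0.85),
    57.6: (50.0, 0.97),   # slower rates gain robustness (processing gain)
}

def expected_energy_per_delivery(rate_kbps, power_mw, prr):
    """Energy (mJ) per delivered packet, counting retransmissions as 1/PRR tries."""
    airtime_s = PACKET_BITS / (rate_kbps * 1000.0)
    energy_per_try_mj = power_mw * airtime_s
    return energy_per_try_mj / prr

best_rate = min(channel_model,
                key=lambda r: expected_energy_per_delivery(r, *channel_model[r]))
print("selected rate:", best_rate, "kbps")
```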
Abstract:
This paper investigates a wireless sensor network deployment for monitoring water quality, e.g. salinity and the level of the underground water table, in a remote tropical area of northern Australia. Our goal is to collect real-time water quality measurements together with the amount of water being pumped out of the area, and to investigate the impacts of current irrigation practice on the environment, in particular underground water salination. This is a challenging task featuring wide geographic coverage (the mean transmission range between nodes is more than 800 meters), highly variable radio propagation, high end-to-end packet delivery requirements, and hostile deployment environments. We have designed, implemented and deployed a sensor network system which has been collecting water quality and flow measurements, e.g. water flow rate and water flow ticks, for over one month. The preliminary results show that sensor networks are a promising approach to deploying a sustainable irrigation system, e.g. maximizing the amount of water pumped out of an area with minimum impact on water quality.
Abstract:
In the past few years, numerous data collection protocols have been developed for wireless sensor networks (WSNs). However, there has been no comparison of their relative performance in realistic environments. Here we report the results of an empirical study, using a Fleck3 sensor network testbed, of four different data collection protocols: one-phase pull Directed Diffusion (DD), Expected Number of Transmissions (ETX), ETX with explicit acknowledgment (ETX-eAck), and ETX with implicit acknowledgment (ETX-iAck). Our empirical study provides useful insights for future sensor network deployments. When the required application end-to-end reliability is not strict (e.g., 70%) and link quality is good, DD and ETX are the best options because of their simplicity and low routing overhead. Both ETX-eAck and ETX-iAck achieve more than 90% end-to-end reliability when the link quality is reasonable (less than 25% packet loss). When the link quality is good, ETX-iAck introduces significantly less routing overhead (up to 50% less) than ETX-eAck. However, if the radio transceiver supports variable packet lengths, ETX-eAck can outperform ETX-iAck when the link quality is poor. The important message from this paper is that the choice of data collection protocol should come after the operating environment is understood. This understanding must include the characteristics of the radio transceiver and link loss statistics from a long-term (across seasons and weather variation) radio survey of the site.
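For reference, the ETX-based protocols above build on the standard expected transmission count metric, ETX = 1 / (df × dr) per link, summed over a route; the sketch below computes it for a hypothetical three-hop path.

```python
# Illustrative sketch only: the standard ETX (expected transmission count)
# link metric, ETX = 1 / (df * dr), summed along a route. The forward and
# reverse delivery ratios below are made-up measurements.
def link_etx(df: float, dr: float) -> float:
    """df: forward delivery ratio, dr: reverse (ACK) delivery ratio."""
    return 1.0 / (df * dr)

def route_etx(links):
    """A route's ETX is the sum of its links' ETX values."""
    return sum(link_etx(df, dr) for df, dr in links)

# Hypothetical 3-hop route: (forward, reverse) delivery ratios per link.
route = [(0.95, 0.90), (0.80, 0.85), (0.99, 0.97)]
print(round(route_etx(route), 2))   # lower is better; used to pick parents
```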
Abstract:
High-rate flooding attacks (also known as Distributed Denial of Service, or DDoS, attacks) continue to constitute a pernicious threat within the Internet domain. In this work we demonstrate how packet source IP addresses, coupled with a change-point analysis of the rate of arrival of new IP addresses, may be sufficient to detect the onset of a high-rate flooding attack. Importantly, minimizing the number of features to be examined directly addresses the issue of scalability of the detection process to higher network speeds. Using a proof-of-concept implementation, we have shown how pre-onset IP addresses can be efficiently represented using a bit vector and used to modify a "white list" filter in a firewall as part of the mitigation strategy.
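A minimal sketch of the two ingredients mentioned above, under assumed parameters: pre-onset source addresses are recorded in a bit vector, and a simple CUSUM change-point test watches the per-window count of never-seen-before addresses. The hash-to-bit mapping, window size and CUSUM constants are illustrative, not the paper's design.

```python
# Illustrative sketch only: representing previously seen (pre-onset) IPv4
# source addresses in a bit vector and applying a simple CUSUM change-point
# test to the rate of never-seen-before addresses. The bit mapping, window
# size and CUSUM parameters are assumptions, not the paper's design.
import ipaddress

BITS = 1 << 24                       # 16M-bit vector (2 MiB)
seen = bytearray(BITS // 8)

def _index(ip: str) -> int:
    return int(ipaddress.IPv4Address(ip)) % BITS

def is_new(ip: str) -> bool:
    i = _index(ip)
    byte, bit = divmod(i, 8)
    new = ((seen[byte] >> bit) & 1) == 0
    seen[byte] |= 1 << bit           # remember the address for the white list
    return new

# CUSUM on per-window counts of new source addresses.
baseline, slack, threshold, cusum = 50.0, 10.0, 200.0, 0.0

def update_window(new_ip_count: int) -> bool:
    """Return True when the cumulative positive drift signals attack onset."""
    global cusum
    cusum = max(0.0, cusum + (new_ip_count - baseline - slack))
    return cusum > threshold

# Usage: feed source addresses seen in one time window.
window = ["203.0.113.%d" % i for i in range(100)]
count = sum(is_new(ip) for ip in window)
print("alarm:", update_window(count))
```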
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In the lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project these onto the lattice's outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
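The thesis estimates the generalized Gaussian shape parameter with a least-squares formulation; as a simpler, commonly used alternative, the sketch below fits the shape parameter by moment matching on synthetic subband coefficients. The bracketing interval and the test data are assumptions.

```python
# Illustrative sketch only: fitting a generalized Gaussian distribution (GGD)
# to wavelet subband coefficients via the classic moment-matching estimator.
# (The thesis itself formulates a least-squares estimator; this simpler
# method is shown for illustration. The synthetic coefficients are made up.)
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

def ggd_ratio(beta: float) -> float:
    """For GGD shape beta: (E|x|)^2 / E[x^2] = Gamma(2/b)^2 / (Gamma(1/b)*Gamma(3/b))."""
    return gamma(2.0 / beta) ** 2 / (gamma(1.0 / beta) * gamma(3.0 / beta))

def estimate_shape(coeffs: np.ndarray) -> float:
    """Invert the (monotone) ratio function to recover the shape parameter."""
    r = np.mean(np.abs(coeffs)) ** 2 / np.mean(coeffs ** 2)
    lo, hi = 0.05, 20.0
    r = min(max(r, ggd_ratio(lo) + 1e-9), ggd_ratio(hi) - 1e-9)  # keep r in range
    return brentq(lambda b: ggd_ratio(b) - r, lo, hi)

# A Laplacian (shape = 1) stands in for a sharply peaked subband.
coeffs = np.random.default_rng(1).laplace(scale=2.0, size=50_000)
print(round(estimate_shape(coeffs), 2))   # expect a value near 1
```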