846 results for Open Information Extraction
Abstract:
The automatic extraction of road features from remotely sensed images has been a topic of great interest within the photogrammetric and remote sensing communities for over three decades. Although various techniques have been reported in the literature, efficiently extracting road details remains challenging as image resolution increases and the demand for accurate, up-to-date road data grows. In this paper, we focus on the automatic detection of road lane markings, which are crucial for many applications, including lane-level navigation and lane departure warning. The approach consists of four steps: i) data preprocessing, ii) image segmentation and road surface detection, iii) road lane marking extraction based on the generated road surface, and iv) testing and system evaluation. The proposed approach uses the unsupervised ISODATA image segmentation algorithm, which segments the image into vegetation regions and road surface based only on the Cb component of the YCbCr color space. A shadow detection method based on the YCbCr color space is also employed to detect and recover the shadows cast on the road surface by vehicles and trees. Finally, the lane marking features are detected from the road surface using histogram clustering. Experiments applying the proposed method to an aerial imagery dataset of Gympie, Queensland demonstrate the efficiency of the approach.
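The Cb-based segmentation step above can be sketched as follows. This is a minimal illustration of extracting the Cb plane (ITU-R BT.601 conversion) and thresholding it; the fixed threshold is a hypothetical stand-in for the unsupervised ISODATA clustering the abstract describes.

```python
import numpy as np

def rgb_to_cb(rgb):
    """Return the Cb chroma plane of an RGB image (floats in [0, 1]),
    using the ITU-R BT.601 conversion coefficients."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.5 - 0.168736 * r - 0.331264 * g + 0.5 * b

def rough_road_mask(rgb, cb_threshold=0.5):
    """Label pixels whose Cb value falls below a (hypothetical) threshold
    as candidate road surface; vegetation tends to separate on Cb."""
    return rgb_to_cb(rgb) < cb_threshold
```

A neutral grey pixel maps to the midpoint Cb value of 0.5, which is why chroma rather than luma separates vegetation from grey road surface.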
Abstract:
With the increasing resolution of remote sensing images, road networks appear as continuous, homogeneous regions of a certain width rather than the traditional thin lines. Road network extraction from large-scale images therefore amounts to reliable road surface detection rather than road line extraction. In this paper, a novel automatic road network detection approach based on the combination of homogram segmentation and mathematical morphology is proposed, comprising three main steps: (i) the image is classified by homogram segmentation to roughly identify road network regions; (ii) morphological opening and closing are employed to fill tiny holes and filter out small road branches; and (iii) the extracted road surface is thinned, pruned with a proposed method, and finally simplified with the Douglas-Peucker algorithm. Results on QuickBird images and aerial photos demonstrate the correctness and efficiency of the proposed process.
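The final simplification step named above, the Douglas-Peucker algorithm, can be sketched in a few lines. This is a minimal recursive version for 2-D polylines, not the authors' implementation:

```python
import math

def douglas_peucker(points, epsilon):
    """Simplify a polyline, keeping interior points farther than
    `epsilon` from the chord between the current endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    chord = math.hypot(x2 - x1, y2 - y1)

    def dist(p):
        # Perpendicular distance from p to the chord (point distance
        # if the endpoints coincide).
        if chord == 0.0:
            return math.hypot(p[0] - x1, p[1] - y1)
        return abs((x2 - x1) * (y1 - p[1]) - (x1 - p[0]) * (y2 - y1)) / chord

    idx, dmax = max(((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
                    key=lambda t: t[1])
    if dmax <= epsilon:
        return [points[0], points[-1]]
    # Split at the farthest point and simplify each half.
    left = douglas_peucker(points[:idx + 1], epsilon)
    right = douglas_peucker(points[idx:], epsilon)
    return left[:-1] + right
```

Points within `epsilon` of the chord are dropped, so a nearly straight centreline collapses to its endpoints while genuine bends survive.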
Abstract:
Accurate road lane information is crucial for advanced vehicle navigation and safety applications. With the increasing availability of very high resolution (VHR) imagery of astonishing quality from digital airborne sources, automatically extracting road details from aerial images would greatly facilitate data acquisition and significantly reduce the cost of data collection and updates. In this paper, we propose an effective approach to detect road lanes from aerial images using image analysis procedures. The algorithm starts by constructing the Digital Surface Model (DSM) and true orthophotos from the stereo images. Next, a maximum likelihood clustering algorithm is used to separate road from other ground objects. After detection of the road surface, the road traffic and lane lines are further detected using texture enhancement and morphological operations. Finally, the generated road network is evaluated to test the performance of the proposed approach, using datasets provided by the Queensland Department of Main Roads. The experimental results demonstrate the effectiveness of our approach.
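The maximum likelihood separation step above can be illustrated with a per-class Gaussian classifier. This sketch uses 1-D pixel features and hypothetical class statistics, not the authors' trained model:

```python
import numpy as np

def ml_classify(pixels, means, variances):
    """Assign each pixel value to the class whose Gaussian
    log-likelihood is highest."""
    pixels = np.asarray(pixels, dtype=float)[:, None]
    means = np.asarray(means, dtype=float)[None, :]
    variances = np.asarray(variances, dtype=float)[None, :]
    # Gaussian log-likelihood of each pixel under each class model.
    log_lik = -0.5 * (np.log(2 * np.pi * variances)
                      + (pixels - means) ** 2 / variances)
    return np.argmax(log_lik, axis=1)
```

With class statistics estimated from training regions (e.g. a "road" class and an "other ground" class), each pixel is labelled with the class that best explains its value.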
Abstract:
This volume examines the social, cultural, and political implications of the shift from traditional forms of print-based libraries to the delivery of online information in educational contexts. Despite the central role of libraries in literacy and learning, research on them has, in the main, remained isolated within the disciplinary boundaries of information and library science. By contrast, this book problematizes and thereby mainstreams the field. It brings together scholars from a wide range of academic fields to explore the dislodging of library discourse from its longstanding apolitical, modernist paradigm. Collectively, the authors interrogate the presuppositions of current library practice and examine how library as place and library as space blend together in ways that may be both complementary and contradictory. Seeking a suitable term to designate this rapidly evolving and much contested development, the editors devised the word “libr@ary,” and use the term arobase to signify the conditions of formation of new libraries within contexts of space, knowledge, and capital.
Abstract:
On the back of the growing capacity of networked digital information technologies to process and visualise large amounts of information in a timely, efficient and user-driven manner we have seen an increasing demand for better access to and re-use of public sector information (PSI). The story is not a new one. Share knowledge and together we can do great things; limit access and we reduce the potential for opportunity. The two volumes of this book seek to explain and analyse this global shift in the way we manage public sector information. In doing so they collect and present papers, reports and submissions on the topic by leading authors and institutions from across the world. These in turn provide people tasked with mapping out and implementing information policy with reference material and practical guidance. Volume 1 draws together papers on the topic by policymakers, academics and practitioners while Volume 2 presents a selection of the key reports and submissions that have been published over the last few years.
Abstract:
Throughout history, developments in medicine have aimed to improve patient quality of life and reduce the trauma associated with surgical treatment. Surgical access to internal organs and bodily structures has traditionally been via large incisions. Endoscopic surgery presents a technique for surgical access via small (10 mm) incisions by utilising a scope and camera for visualisation of the operative site. Endoscopy presents enormous benefits for patients in terms of lower post-operative discomfort and reduced recovery and hospitalisation time. Since the first gall bladder extraction operation was performed in France in 1987, endoscopic surgery has been embraced by the international medical community. With the adoption of the new technique, new problems never previously encountered in open surgery were revealed. One such problem is that the removal of large tissue specimens and organs is restricted by the small incision size. Instruments have been developed to address this problem; however, none of the devices provides a totally satisfactory solution. They have a number of critical weaknesses: the size of the access incision has to be enlarged, thereby compromising the entire endoscopic approach to surgery; the physical quality of the specimen extracted is very poor and is not suitable for the necessary post-operative pathological examinations; and the safety of both the patient and the physician is jeopardised. The problem of tissue and organ extraction at endoscopy is investigated and addressed. In addition to background information covering endoscopic surgery, this thesis describes the entire approach to the design problem and the steps taken before arriving at the final solution. This thesis contributes to the body of knowledge associated with the development of endoscopic surgical instruments. A new product capable of extracting large tissue specimens and organs in endoscopy is the final outcome of the research.
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed for the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made, and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
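The subband split on which wavelet coders like the one above allocate bits can be illustrated with one level of a 2-D Haar decomposition. This is a minimal sketch of the generic subband structure (LL, LH, HL, HH), not the thesis's specific wavelet packet design:

```python
import numpy as np

def haar2d(img):
    """Split an even-sized 2-D array into its four Haar subbands."""
    a = np.asarray(img, dtype=float)
    # Row transform: pairwise averages (low-pass) and differences (high-pass).
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Column transform on both results gives the four subbands.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh
```

Smooth image content concentrates in LL while ridge-like detail lands in the high-pass bands, which is what lets a coder spend bits unevenly across subbands.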
Abstract:
Level crossing crashes have been shown to result in enormous human and financial cost to society. According to the Australian Transport Safety Bureau (ATSB) [5], a total of 632 railway level crossing (RLX) collisions between trains and road vehicles occurred in Australia between 2001 and June 2009. The cost of RLX collisions runs into the tens of millions of dollars each year in Australia [6]. In addition, loss of life and injury are commonplace where collisions occur. Based on estimates that 40% of rail-related fatalities occur at level crossings [12], it is estimated that 142 deaths occurred at RLX between 2001 and June 2009. The aims of this paper are to (i) summarise crash patterns in Australia, (ii) review existing international ITS interventions to improve level crossing safety, and (iii) highlight open human factors research issues. Human factors (e.g., driver error, lapses or violations) have been evidenced as a significant contributing factor in RLX collisions, with drivers of road vehicles particularly responsible for many collisions. Unintentional errors have been found to contribute to 46% of RLX collisions [6] and appear to be far more commonplace than deliberate violations. Humans have been found to be inherently inadequate at using the sensory information available to them to facilitate safe decision-making at RLX, and tend to underestimate the speed of approaching large objects due to the non-linear increase in perceived size [6]. Collisions resulting from misjudgements of train approach speed and distance are common [20]. Thus, a fundamental goal for improved RLX safety is the provision of sufficient contextual information to road vehicle drivers to facilitate safe decision-making regarding crossing behaviours.
Abstract:
INTRODUCTION: Since the introduction of its QUT ePrints institutional repository of published research outputs, together with the world’s first mandate for author contributions to an institutional repository, Queensland University of Technology (QUT) has been a leader in supporting green road open access. With QUT ePrints providing the mechanism for supporting the green road to open access, QUT has since continued to expand its secondary open access strategy supporting gold road open access, which is also designed to help QUT researchers maximise the accessibility, and thus the impact, of their research. ---------- METHODS: QUT Library has adopted the position of selectively supporting true gold road open access publishing by using the Library Resource Allocation budget to pay the author publication fees for QUT authors wishing to publish in the open access journals of a range of publishers, including BioMed Central, Public Library of Science and Hindawi. QUT Library has been careful to support only true open access publishers, and not those open access publishers with hybrid models which “double dip” by charging authors publication fees and libraries subscription fees for the same journal content. QUT Library has maintained a watch on the growing number of open access journals available from gold road open access publishers and their increasing rate of success as measured by publication impact. ---------- RESULTS: This paper reports on the successes and challenges of QUT’s efforts to support true gold road open access publishers and promote these publishing strategy options to researchers at QUT. The number and spread of QUT papers submitted and published in the journals of each publisher are provided.
Citation counts for papers and authors are also presented and analysed, with the intention of identifying the benefits to accessibility and research impact for early career and established researchers. ---------- CONCLUSIONS: QUT Library is eager to continue and further develop support for this publishing strategy, and makes a number of recommendations to other research institutions on how they can best achieve success with this strategy.
Abstract:
This paper argues for a model of open system design for sustainable architecture, based on a thermodynamic framework of entropy as an evolutionary paradigm. The framework can be simplified to the statement that an open system evolves in a non-linear pattern from a far-from-equilibrium state towards a non-equilibrium state of entropy balance, which is a highly ordered organization of the system in which order comes out of chaos. This paper is work in progress on a PhD research project which aims to propose building information modelling for the optimization and adaptation of buildings' environmental performance as an alternative sustainable design program in architecture. It will be used for efficient distribution and consumption of energy and material resources over building life cycles, with the active involvement of end-users and the physical constraints of the natural environment.
Abstract:
"This column is distinguished from previous Impact columns in that it concerns the development tightrope between research and commercial take-up, and the role of the LGPL in an open source workflow toolkit produced in a university environment. Many ubiquitous systems have followed this route (Apache, BSD Unix, ...), and the lessons this Service Oriented Architecture produces cast yet more light on how software diffuses out to impact us all." Michiel van Genuchten and Les Hatton. Workflow management systems support the design, execution and analysis of business processes. A workflow management system needs to guarantee that work is conducted at the right time, by the right person or software application, through the execution of a workflow process model. Traditionally, there has been a lack of broad support for a workflow modeling standard. Standardization efforts proposed by the Workflow Management Coalition in the late nineties suffered from limited support for routing constructs. In fact, as later demonstrated by the Workflow Patterns Initiative (www.workflowpatterns.com), a much wider range of constructs is required when modeling realistic workflows in practice. YAWL (Yet Another Workflow Language) is a workflow language that was developed to show that comprehensive support for the workflow patterns is achievable. Soon after its inception in 2002, a prototype system was built to demonstrate that it was possible to have a system support such a complex language. From that initial prototype, YAWL has grown into a fully-fledged, open source workflow management system and support environment.
Abstract:
This paper argues for a model of open systems evolution, based on evolutionary thermodynamics and complex systems science, as a design paradigm for sustainable architecture. The mechanism of open system evolution is specified in mathematical simulations and theoretical discourse. Based on this mechanism, the authors propose an intelligent building model of sustainable design using a holistic information system of the end-users, the building and nature. This information system is used to control the consumption of energy and material resources in the building system at the microscopic scale, and to adapt the environmental performance of the building system to the natural environment at the macroscopic scale, for an evolutionary emergence of sustainable building performance.
Abstract:
Background: There has been a significant increase in the availability of online programs for alcohol problems. A systematic review of the research evidence underpinning these programs is timely. Objectives: Our objective was to review the efficacy of online interventions for alcohol misuse. Systematic searches of Medline, PsycINFO, Web of Science, and Scopus were conducted for English abstracts (excluding dissertations) published from 1998 onward. Search terms were: (1) Internet, Web*; (2) online, computer*; (3) alcohol*; and (4) effect*, trial*, random* (where * denotes a wildcard). Forward and backward searches from identified papers were also conducted. Articles were included if (1) the primary intervention was delivered and accessed via the Internet, (2) the intervention focused on moderating or stopping alcohol consumption, and (3) the study was a randomized controlled trial of an alcohol-related screen, assessment, or intervention. Results: The literature search initially yielded 31 randomized controlled trials (RCTs), 17 of which met the inclusion criteria. Of these 17 studies, 12 (70.6%) were conducted with university students, and 11 (64.7%) specifically focused on at-risk, heavy, or binge drinkers. Sample sizes ranged from 40 to 3216 (median 261), with 12 (70.6%) studies predominantly involving brief personalized feedback interventions. Using published data, effect sizes could be extracted from 8 of the 17 studies. For alcohol units per week or month, based on the 5 RCTs from which such a measure could be extracted, differential effect sizes to post-treatment ranged from 0.02 to 0.81 (mean 0.42, median 0.54). Pre-post effect sizes for brief personalized feedback interventions ranged from 0.02 to 0.81, and two multi-session modularized interventions each obtained a pre-post effect size of 0.56.
Pre-post differential effect sizes for peak blood alcohol concentrations (BAC) ranged from 0.22 to 0.88, with a mean effect size of 0.66. Conclusions: The available evidence suggests that users can benefit from online alcohol interventions and that this approach could be particularly useful for groups less likely to access traditional alcohol-related services, such as women, young people, and at-risk users. However, caution should be exercised given the limited number of studies allowing extraction of effect sizes, the heterogeneity of outcome measures and follow-up periods, and the large proportion of student-based studies. More extensive RCTs in community samples are required to better understand the efficacy of specific online alcohol approaches, program dosage, the additive effect of telephone or face-to-face interventions, and effective strategies for their dissemination and marketing.
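Between-group effect sizes like those summarised above are commonly computed as Cohen's d with a pooled standard deviation. The sketch below shows that standard formula; it is an illustration of the metric, not the review's exact extraction procedure:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardised mean difference between two independent groups,
    using the pooled standard deviation in the denominator."""
    pooled_var = (((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                  / (n1 + n2 - 2))
    return (mean1 - mean2) / math.sqrt(pooled_var)
```

For example, a 2-unit reduction in weekly drinks against a common standard deviation of 2 gives d = 1.0, which would sit above the range reported for the reviewed interventions.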