Abstract:
Health complaint statistics are important for identifying problems and bringing about improvements to health care provided by health service providers and to the wider health care system. This paper overviews complaints handling by the eight Australian state and territory health complaint entities, based on an analysis of data from their annual reports. The analysis shows considerable variation between jurisdictions in the ways complaint data are defined, collected and recorded. Complaints from the public are an important accountability mechanism and open a window on service quality. The lack of a national approach leads to fragmentation of complaint data and a lost opportunity to use national data to assist policy development and identify the main areas causing consumers to complain. We need a national approach to complaints data collection in order to better respond to patients’ concerns.
Abstract:
Deterministic transit capacity analysis applies to the planning, design and operational management of urban transit systems. The Transit Capacity and Quality of Service Manual (1) and Vuchic (2, 3) enable transit performance to be quantified and assessed using transit capacity and productive capacity. This paper further defines important productive performance measures of an individual transit service and a transit line. Transit work (p-km) captures the transit task performed over distance. Passenger transmission (p-km/h) captures the passenger task delivered by a service at speed. Transit productiveness (p-km/h) captures transit work performed over time. These measures are useful to operators in understanding their services’ or systems’ capabilities and passenger quality of service. This paper accounts for variability in utilized passenger demand along a line, and for high passenger load conditions where passenger pass-up delay occurs. A hypothetical case study of an individual bus service’s operation demonstrates the usefulness of passenger transmission in comparing existing and growth scenarios. A hypothetical case study of a bus line’s operation during a peak hour window demonstrates the theory’s usefulness in examining the contribution of individual services to line productive performance. The theory may be used to assess scenarios in order to benchmark or compare lines and segments, examine operating conditions, or consider improvements.
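The three measures lend themselves to a short worked sketch (the segment loads, lengths and running time below are invented for illustration and are not from the paper):

```python
# Worked sketch of the productive performance measures; all numbers are
# assumed example data, not from the paper.
segments = [
    {"load": 40, "length_km": 2.0},   # passengers on board over each segment
    {"load": 55, "length_km": 1.5},
    {"load": 30, "length_km": 3.0},
]
service_time_h = 0.5                  # end-to-end running time (assumed)

# Transit work (p-km): the transit task performed over distance.
transit_work = sum(s["load"] * s["length_km"] for s in segments)

# Transit productiveness (p-km/h): transit work performed over time.
productiveness = transit_work / service_time_h

# Passenger transmission (p-km/h): the passenger task delivered at speed,
# here evaluated for the maximum-load segment.
line_length_km = sum(s["length_km"] for s in segments)
avg_speed_kmh = line_length_km / service_time_h
transmission = max(s["load"] for s in segments) * avg_speed_kmh

print(transit_work, productiveness, transmission)  # 252.5 505.0 715.0
```

Under these assumed numbers, productiveness (505 p-km/h) falls below the transmission achievable at the maximum-load point (715 p-km/h), reflecting load variability along the line.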
Abstract:
The rank transform is a non-parametric transform which has been applied to the stereo matching problem. The advantages of this transform include its invariance to radiometric distortion and its amenability to hardware implementation. This paper describes the derivation of the rank constraint for matching using the rank transform. Previous work has shown that this constraint is capable of resolving ambiguous matches, thereby improving match reliability, and a new matching algorithm incorporating this constraint was also proposed. This paper extends that previous work by proposing a matching algorithm which uses a multi-dimensional match surface in which the match score is computed for every possible template and match window combination. The principal advantage of this algorithm is that the use of the match surface enforces the left-right consistency and uniqueness constraints, thus improving the algorithm's ability to remove invalid matches. Experimental results for a number of test stereo pairs show that the new algorithm is capable of identifying and removing a large number of incorrect matches, particularly in the case of occlusions.
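As background, the rank transform itself can be sketched in a few lines (a minimal illustration with an assumed window size and zeroed borders, not the paper's implementation):

```python
import numpy as np

def rank_transform(img, win=5):
    """Rank transform: each pixel is replaced by the number of pixels in
    its surrounding window whose intensity is less than the centre pixel.
    Minimal sketch; border pixels are simply left at zero."""
    r = win // 2
    out = np.zeros_like(img, dtype=np.int32)
    h, w = img.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = img[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = np.count_nonzero(patch < img[y, x])
    return out
```

Because the transform depends only on the ordering of intensities within the window, it is unchanged by any monotonically increasing (e.g. gain/offset) radiometric distortion of the image.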
Abstract:
A fundamental problem faced by stereo matching algorithms is the matching or correspondence problem. A wide range of algorithms has been proposed for the correspondence problem. For all matching algorithms, it would be useful to be able to compute a measure of the probability of correctness, or reliability, of a match. This paper focuses in particular on one class of matching algorithms, which are based on the rank transform. The interest in these algorithms for stereo matching stems from their invariance to radiometric distortion and their amenability to fast hardware implementation. This work differs from previous work in that it derives, from first principles, an expression for the probability of a correct match. The method is based on an enumeration of all possible symbols for matching. The theoretical results for disparity error prediction obtained using this method were found to agree well with experimental results. However, disadvantages of the developed technique are that it is not easily applicable to real images, and that it is too computationally expensive for practical window sizes. Nevertheless, the exercise provides an interesting and novel analysis of match reliability.
Abstract:
Architecture Post Mortem surveys architecture’s encounter with death, decline, and ruination following late capitalism. As the world moves closer to an economic abyss that many perceive to be the death of capital, contraction and crisis are no longer mere phases of normal market fluctuations, but rather the irruption of the unconscious of ideology itself. Post mortem is that historical moment wherein architecture’s symbolic contract with capital is put on stage, naked to all. Architecture is not irrelevant to fiscal and political contagion as is commonly believed; it is both the victim and a penetrating analytical agent of the current crisis. As the very apparatus for modernity’s guilt and unfulfilled drives (modernity’s debt), architecture is that ideological element that functions as a master signifier of its own destruction, ordering all other signifiers and modes of signification beneath it. It is under these conditions that architecture theory has retreated to an “Alamo” of history, a final desert outpost where history has been asked to transcend itself. For architecture’s hoped-for utopia always involves an apocalypse. This timely collection of essays reformulates architecture’s relation to modernity via the operational death-drive: architecture is but a passage between life and death. This collection includes essays by Kazi K. Ashraf, David Bertolini, Simone Brott, Peggy Deamer, Didem Ekici, Paul Emmons, Donald Kunze, Todd McGowan, Gevork Hartoonian, Nadir Lahiji, Erika Naginski, and Dennis Maher. Contents: Introduction: ‘the way things are’, Donald Kunze; Driven into the public: the psychic constitution of space, Todd McGowan; Dead or alive in Joburg, Simone Brott; Building in-between the two deaths: a post mortem manifesto, Nadir Lahiji; Kant, Sade, ethics and architecture, David Bertolini; Post mortem: building deconstruction, Kazi K. Ashraf; The slow-fast architecture of love in the ruins, Donald Kunze; Progress: re-building the ruins of architecture, Gevork Hartoonian; Adrian Stokes: surface suicide, Peggy Deamer; A window to the soul: depth in the early modern section drawing, Paul Emmons; Preliminary thoughts on Piranesi and Vico, Erika Naginski; Architectural asceticism and austerity, Didem Ekici; 900 miles to Paradise, and other afterlives of architecture, Dennis Maher; Index.
Abstract:
This paper presents a novel technique for segmenting an audio stream into homogeneous regions according to speaker identities, background noise, music, environmental and channel conditions. Audio segmentation is useful in audio diarization systems, which aim to annotate an input audio stream with information that attributes temporal regions of the audio to their specific sources. The segmentation method introduced in this paper uses the Generalized Likelihood Ratio (GLR), computed between two adjacent sliding windows over preprocessed speech. This approach is inspired by the popular segmentation method proposed in the pioneering work of Chen and Gopalakrishnan, using the Bayesian Information Criterion (BIC) with an expanding search window. This paper aims to identify and address the shortcomings associated with such an approach. The proposed segmentation strategy is evaluated on the 2002 Rich Transcription (RT-02) Evaluation dataset, and a miss rate of 19.47% and a false alarm rate of 16.94% are achieved at the optimal threshold.
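A minimal sketch of a GLR-style distance between two adjacent windows, each modelled as a single full-covariance Gaussian (the synthetic-feature setup is an assumption; feature extraction and the paper's preprocessing are omitted):

```python
import numpy as np

def glr_distance(x, y):
    """GLR-style distance between two adjacent windows of feature vectors,
    each modelled as a full-covariance Gaussian: log-likelihood ratio of
    the "one source" model against the "two sources" model."""
    z = np.vstack([x, y])

    def logdet_cov(a):
        # log-determinant of the sample covariance of the window
        return np.linalg.slogdet(np.cov(a, rowvar=False))[1]

    return 0.5 * (len(z) * logdet_cov(z)
                  - len(x) * logdet_cov(x)
                  - len(y) * logdet_cov(y))
```

A boundary is hypothesised where the distance curve peaks: windows drawn from different sources inflate the pooled covariance and therefore score higher than windows from the same source.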
Abstract:
This paper presents an efficient face detection method suitable for real-time surveillance applications. Improved efficiency is achieved by constraining the search window of an AdaBoost face detector to pre-selected regions. Firstly, the proposed method takes a sparse grid of sample pixels from the image to reduce whole-image scan time. A fusion of foreground segmentation and skin colour segmentation is then used to select candidate face regions. Finally, a classifier-based face detector is applied only to selected regions to verify the presence of a face (the Viola-Jones detector is used in this paper). The proposed system is evaluated using 640 x 480 pixel test images and compared with other relevant methods. Experimental results show that the proposed method reduces the detection time to 42 ms, where the Viola-Jones detector alone requires 565 ms (on a desktop processor). This improvement makes the face detector suitable for real-time applications. Furthermore, the proposed method requires 50% of the computation time of the best competing method, while reducing the false positive rate by 3.2% and maintaining the same hit rate.
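The region pre-selection stage can be sketched as follows (the thresholds, grid step and simple RGB skin rule are assumptions for illustration; the paper's actual segmentation is more involved, and a real system would pass surviving regions to a Viola-Jones detector):

```python
import numpy as np

def candidate_regions(frame, background, step=4):
    """Select candidate face regions by fusing foreground and skin-colour
    cues on a sparse grid of sample pixels. Illustrative sketch only."""
    grid = frame[::step, ::step].astype(np.int32)
    bg = background[::step, ::step].astype(np.int32)
    # Foreground cue: large difference from the background model.
    fg = np.abs(grid - bg).sum(axis=2) > 60
    # Skin cue: a simple rule on RGB channels (assumed thresholds).
    r, g, b = grid[..., 0], grid[..., 1], grid[..., 2]
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)
    return fg & skin  # sparse mask of grid cells worth scanning
```

Only grid cells where both cues agree would be scanned by the (much more expensive) classifier, which is where the reported speed-up comes from.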
Abstract:
This case-study explores alternative and experimental methods of research data acquisition through an emerging research methodology, ‘Guerrilla Research Tactics’ [GRT]. The premise is that the researcher develops covert tactics for attracting and engaging with research participants. These methods range from simple analogue interventions to bespoke physical artefacts which contain an embedded digital link to a live, interactive data-collecting resource, such as an online poll or survey. These artefacts are purposefully placed in environments where the researcher anticipates an encounter and response from the potential research participant; the choice of design and placement of artefacts is specific and intentional. This case-study assesses the application of the GRT methodology as an alternative, engaging and interactive method of data acquisition for higher degree research. Extending Gauntlett’s definition of ‘new creative methods… an alternative to language driven qualitative research methods' (2007), this case-study contributes to the existing body of literature addressing creative and interactive approaches to HDR data collection. The case-study was undertaken with Masters of Architecture and Urban Design research students at QUT in 2012. Typically, students within these creative disciplines view research as a taxing and boring process, distracting them from their studio design focus. An obstacle that many students face is acquiring data from their intended participant groups. In response to these challenges, the authors worked with students to develop research methods that are creative, fun and engaging for both the students and their research participants.
GRT is influenced by and developed from a combination of participatory action research (Kindon, 2008) and unobtrusive research methods (Kellehear, 1993), to enhance social research. GRT takes unobtrusive research in a new direction, beyond typical social research methods. The Masters research students developed alternative methods for acquiring data, which relied on a combination of analogue design interventions and online platforms commonly distributed through social networks. They identified critical issues that required action by the community, and the processes they developed focused on engaging with communities to propose solutions. Key characteristics shared by GRT and Guerrilla Activism are notions of political issues, the unexpected, the unconventional, and being interactive, unique and thought-provoking. The trend of Guerrilla Activism has been adapted to marketing, communication, gardening, craftivism, theatre, poetry and art. Focusing on the action element, and examining current trends within Guerrilla marketing, we believe that GRT can be applied to a range of research areas within various academic disciplines.
Abstract:
Smartphones are becoming increasingly popular, and several forms of malware have appeared targeting these devices. General countermeasures against smartphone malware are currently limited to signature-based antivirus scanners, which efficiently detect known malware but have serious shortcomings with new and unknown malware, creating a window of opportunity for attackers. As smartphones become hosts for sensitive data and applications, extended malware detection mechanisms are necessary that comply with the corresponding resource constraints. The contribution of this paper is twofold. First, we perform static analysis on executables to extract their function calls in the Android environment using the command readelf. Function call lists are compared with those of malware executables in order to classify them with the PART, Prism and Nearest Neighbor algorithms. Second, we present a collaborative malware detection approach to extend these results. Corresponding simulation results are presented.
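The static-analysis step can be sketched as follows (a toy illustration: the sample readelf output and the set-overlap score are assumptions, not the paper's classifiers):

```python
# Toy sketch of the static-analysis step: extract function names from
# `readelf -s`-style symbol-table output and compare call lists by set
# overlap. The sample snippet and the Jaccard score are assumptions.
SAMPLE_READELF = """
Num:    Value  Size Type    Bind   Vis      Ndx Name
  1: 00000000     0 FUNC    GLOBAL DEFAULT  UND sendTextMessage
  2: 00000000     0 FUNC    GLOBAL DEFAULT  UND getDeviceId
  3: 00000000     0 OBJECT  GLOBAL DEFAULT   12 some_data
"""

def function_calls(readelf_output):
    """Collect names of FUNC symbols from readelf symbol-table output."""
    calls = set()
    for line in readelf_output.splitlines():
        fields = line.split()
        if len(fields) >= 8 and fields[3] == "FUNC":
            calls.add(fields[7])
    return calls

def jaccard(a, b):
    """Set similarity, used here as a stand-in for a real classifier."""
    return len(a & b) / len(a | b) if a | b else 0.0

known_malware = {"sendTextMessage", "getDeviceId", "openConnection"}
print(jaccard(function_calls(SAMPLE_READELF), known_malware))
```

In the paper, the extracted function-call lists feed learning algorithms (PART, Prism, Nearest Neighbor) rather than a raw overlap score.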
Abstract:
This work investigates the accuracy and efficiency tradeoffs between centralized and collective (distributed) algorithms for (i) sampling, and (ii) n-way data analysis techniques in multidimensional stream data, such as Internet chatroom communications. Its contributions are threefold. First, we use the Kolmogorov-Smirnov goodness-of-fit test to show that statistical differences between real data obtained by collective sampling in the time dimension from multiple servers and data obtained from a single server are insignificant. Second, we show using the real data that collective analysis of 3-way data arrays (users x keywords x time), known as high-order tensors, is more efficient than centralized algorithms with respect to both space and computational cost. Furthermore, we show that this gain is obtained without loss of accuracy. Third, we examine the sensitivity of collective construction and analysis of high-order data tensors to the choice of server selection and sampling window size. We construct 4-way tensors (users x keywords x time x servers) and analyze them to show the impact of server and window size selections on the results.
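Constructing the 3-way (users x keywords x time) tensor from raw events can be sketched as follows (the dimensions and event tuples are invented for illustration):

```python
import numpy as np

# Build a 3-way (users x keywords x time) count tensor from chat events.
# The event tuples and dimensions below are assumed example data.
events = [  # (user, keyword, time-bin)
    (0, 1, 0), (0, 1, 0), (1, 2, 0), (2, 0, 1), (1, 2, 1),
]
n_users, n_keywords, n_bins = 3, 3, 2
tensor = np.zeros((n_users, n_keywords, n_bins))
for u, k, t in events:
    tensor[u, k, t] += 1  # count occurrences in each cell

# Mode-1 unfolding (users x (keywords*time)): the matricization consumed
# by standard tensor decomposition algorithms.
unfolded = tensor.reshape(n_users, -1)
```

In the collective setting, each server would build such a tensor from its own sample and the analyses would be combined, rather than shipping all raw events to one site.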
Abstract:
Extracting and aggregating the relevant event records relating to an identified security incident from the multitude of heterogeneous logs in an enterprise network is a difficult challenge, and presenting the information in a meaningful way is an additional one. This paper looks at solutions to this problem by first identifying three main transforms: log collection, correlation, and visual transformation. Having identified that the CEE project will address the first transform, this paper focuses on the second, while the third is left for future work. To aggregate by correlating event records, we demonstrate the use of two correlation methods, simple and composite. These make use of a defined mapping schema and confidence values to dynamically query the normalised dataset and to constrain result events to within a time window. Doing so improves the quality of results required for the iterative re-querying process being undertaken. Final results of the process are output as nodes and edges suitable for presentation as a network graph.
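A simple correlation pass of this kind can be sketched as follows (the field names, window size and sample records are assumptions; the composite method and confidence values are not shown):

```python
from datetime import datetime, timedelta

# Toy sketch of simple event correlation: records from normalised logs
# are linked when they share a field value and fall inside a time window.
events = [
    {"id": "a", "src_ip": "10.0.0.5", "time": datetime(2024, 1, 1, 12, 0, 0)},
    {"id": "b", "src_ip": "10.0.0.5", "time": datetime(2024, 1, 1, 12, 0, 30)},
    {"id": "c", "src_ip": "10.0.0.9", "time": datetime(2024, 1, 1, 12, 0, 40)},
    {"id": "d", "src_ip": "10.0.0.5", "time": datetime(2024, 1, 1, 13, 0, 0)},
]

def correlate(events, field, window=timedelta(minutes=5)):
    """Return graph edges between events matching on `field` within `window`."""
    edges = []
    for i, e1 in enumerate(events):
        for e2 in events[i + 1:]:
            if e1[field] == e2[field] and abs(e1["time"] - e2["time"]) <= window:
                edges.append((e1["id"], e2["id"]))
    return edges

print(correlate(events, "src_ip"))  # [('a', 'b')]
```

The resulting `(id, id)` pairs are exactly the node/edge form suitable for rendering as a network graph.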
Abstract:
Long term exposure to vehicle emissions has been associated with harmful health effects. Children are amongst the most susceptible groups, and schools represent an environment where they can experience significant exposure to vehicle emissions. However, there are limited studies on children’s exposure to vehicle emissions in schools. The aim of this study was to quantify the concentration of organic aerosol, and in particular the vehicle emissions, that children are exposed to during school hours. An Aerodyne compact time-of-flight aerosol mass spectrometer (TOF-AMS) was therefore deployed at five urban schools in Brisbane, Australia. The TOF-AMS enabled the chemical composition of the non-refractory PM1 (NR-PM1) to be analysed with high temporal resolution to assess the concentration of vehicle emissions and other organic aerosols during school hours. At each school the organic fraction comprised the majority of the NR-PM1, with secondary organic aerosols as the main constituent. At two of the schools, a significant source of the organic aerosol (OA) was slightly aged vehicle emissions from nearby highways. More aged and oxidised OA was observed at the other three schools, which also recorded strong biomass burning influences. Primary emissions were found to dominate the OA at only one school, which had an O:C ratio of 0.17 due to fuel-powered gardening equipment used near the TOF-AMS. The diurnal cycle of OA concentration varied between schools and was found to be at a minimum during school hours. The major organic component that school children were exposed to during school hours was secondary OA. Peak exposure of school children to hydrocarbon-like OA (HOA) occurred during school drop-off and pick-up times. Unless a school is located near major roads, children are exposed predominantly to regional secondary OA, rather than local emissions, during school hours in urban environments.
Abstract:
Speaker diarization is the process of annotating an input audio signal with information that attributes temporal regions of the signal to their respective sources, which may include both speech and non-speech events. For speech regions, the diarization system also specifies the locations of speaker boundaries and assigns relative speaker labels to each homogeneous segment of speech. In short, speaker diarization systems effectively answer the question of ‘who spoke when’. There are several important applications for speaker diarization technology, such as facilitating speaker indexing systems to allow users to directly access the relevant segments of interest within a given recording, and assisting with other downstream processes such as summarizing and parsing. When combined with automatic speech recognition (ASR) systems, the metadata extracted from a speaker diarization system can provide complementary information for ASR transcripts, including the location of speaker turns and relative speaker segment labels, making the transcripts more readable. Speaker diarization output can also be used to localize the instances of specific speakers to pool data for model adaptation, which in turn boosts transcription accuracies. Speaker diarization therefore plays an important role as a preliminary step in the automatic transcription of audio data. The aim of this work is to improve the usefulness and practicality of speaker diarization technology through the reduction of diarization error rates. In particular, this research is focused on the segmentation and clustering stages within a diarization system. Although particular emphasis is placed on the broadcast news audio domain, and systems developed throughout this work are also trained and tested on broadcast news data, the techniques proposed in this dissertation are also applicable to other domains including telephone conversations and meetings audio.
Three main research themes were pursued: heuristic rules for speaker segmentation, modelling uncertainty in speaker model estimates, and modelling uncertainty in eigenvoice speaker modelling. The use of heuristic approaches for the speaker segmentation task was first investigated, with emphasis placed on minimizing missed boundary detections. A set of heuristic rules was proposed to govern the detection and heuristic selection of candidate speaker segment boundaries. A second pass, using the same heuristic algorithm with a smaller window, was also proposed with the aim of improving the detection of boundaries around short speaker segments. Compared to single-threshold-based methods, the proposed heuristic approach was shown to provide improved segmentation performance, leading to a reduction in the overall diarization error rate. Methods to model the uncertainty in speaker model estimates were developed to address the difficulties associated with making segmentation and clustering decisions with limited data in the speaker segments. The Bayes factor, derived specifically for multivariate Gaussian speaker modelling, was introduced to account for the uncertainty of the speaker model estimates. The use of the Bayes factor also enabled the incorporation of prior information regarding the audio to aid segmentation and clustering decisions. The idea of modelling uncertainty in speaker model estimates was also extended to the eigenvoice speaker modelling framework for the speaker clustering task. Building on the application of Bayesian approaches to the speaker diarization problem, the proposed approach takes into account the uncertainty associated with the explicit estimation of the speaker factors. The proposed decision criteria, based on Bayesian theory, were shown to generally outperform their non-Bayesian counterparts.
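The heuristic selection of candidate boundaries can be sketched as a peak-picking rule over a precomputed window-distance curve (the threshold, minimum gap and distance curve are assumptions; the second, smaller-window pass described above is not shown):

```python
import numpy as np

def detect_boundaries(dist, threshold, min_gap):
    """Pick candidate speaker boundaries where a window-distance curve
    (e.g. GLR/BIC between sliding windows, assumed precomputed) has a
    local peak above `threshold`, keeping peaks at least `min_gap`
    frames apart. Toy illustration of heuristic boundary selection."""
    peaks = [i for i in range(1, len(dist) - 1)
             if dist[i] > threshold
             and dist[i] >= dist[i - 1] and dist[i] >= dist[i + 1]]
    kept = []
    for p in peaks:
        if not kept or p - kept[-1] >= min_gap:
            kept.append(p)
    return kept
```

Lowering the threshold trades missed boundaries for false alarms, which is why the work above emphasises rule design and a second pass rather than a single fixed threshold.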
Abstract:
Poem
Abstract:
Anisotropic damage distribution and evolution have a profound effect on borehole stress concentrations. Damage evolution is an irreversible process that is not adequately described within classical equilibrium thermodynamics. Therefore, we propose a constitutive model, based on non-equilibrium thermodynamics, that accounts for anisotropic damage distribution, anisotropic damage threshold and anisotropic damage evolution. We implemented this constitutive model numerically, using the finite element method, to calculate stress–strain curves and borehole stresses. The resulting stress–strain curves are distinctively different from linear elastic-brittle and linear elastic-ideal plastic constitutive models and realistically model experimental responses of brittle rocks. We show that the onset of damage evolution leads to an inhomogeneous redistribution of material properties and stresses along the borehole wall. The classical linear elastic-brittle approach to borehole stability analysis systematically overestimates the stress concentrations on the borehole wall, because dissipative strain-softening is underestimated. The proposed damage mechanics approach explicitly models dissipative behaviour and leads to non-conservative mud window estimations. Furthermore, anisotropic rocks with preferential planes of failure, like shales, can be addressed with our model.
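The qualitative difference from a linear elastic-brittle response can be illustrated with a one-dimensional elastic-damage sketch (the linear damage law and all parameter values are assumptions; the paper's model is anisotropic and derived from non-equilibrium thermodynamics):

```python
import numpy as np

# One-dimensional elastic-damage sketch: stress = (1 - D) * E * strain,
# with the damage variable D growing linearly once strain exceeds a
# threshold. All parameter values are assumed for illustration.
E = 30e9        # Young's modulus, Pa (assumed)
eps_0 = 1e-3    # damage threshold strain (assumed)
eps_f = 4e-3    # strain at which damage is complete (assumed)

def stress(eps):
    d = np.clip((eps - eps_0) / (eps_f - eps_0), 0.0, 1.0)  # damage D in [0, 1]
    return (1.0 - d) * E * eps

# Below eps_0 the response is linear elastic; above it the curve departs
# from the elastic prediction E * eps, reaches a peak, and softens to
# zero at eps_f, in contrast to an elastic-brittle (sudden-drop) model.
```

The dissipative softening branch is exactly what the classical linear elastic-brittle treatment omits, which is why it overestimates borehole-wall stress concentrations.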