891 results for Perceptual image quality


Relevance: 20.00%

Abstract:

While the information services function's (ISF) service quality is not a new concept and has received considerable attention for over two decades, cross-cultural research on ISF service quality is not very mature. The author argues that the relationship between cultural dimensions and the ISF's service quality dimensions may provide useful insights into how organisations should deal with different cultural groups. This paper shows that ISF service quality dimensions vary from one culture to another. The study adopts Hofstede's (1980, 1991) typology of cultures and the "zones of tolerance" (ZOT) service quality measure reported by Kettinger & Lee (2005) as its primary theoretical bases. The author hypothesised and tested the influences of culture on users' service quality perceptions and found strong empirical support for the study's hypotheses. The results indicate that, as a result of their cultural characteristics, users vary both in their overall service quality perceptions and in their perceptions of each of the four dimensions of ZOT service quality.

Relevance: 20.00%

Abstract:

Australia is leading the way in establishing a national system (the Palliative Care Outcomes Collaboration – PCOC) to measure the outcomes and quality of specialist palliative care services and to benchmark services across the country. This article reports on analysis of data collected routinely at point-of-care on 5939 patients treated by the first fifty-one services that voluntarily joined PCOC. By March 2009, 111 services had agreed to join PCOC, representing more than 70% of services and more than 80% of specialist palliative care patients nationally. All states and territories are involved in this unique process, which has involved extensive consultation, infrastructure development and close collaboration between health services and researchers. The challenges of dealing with wide variation in outcomes and practice, and the progress achieved to date, are described. PCOC aims to improve understanding of the reasons for variations in clinical outcomes between specialist palliative care patients, and of differences in service outcomes, as a critical step in an ongoing process to improve both service quality and patient outcomes.

What is known about the topic? Governments internationally are grappling with how best to provide care for people with life-limiting illnesses and how best to measure the outcomes and quality of that care. There is little international evidence on how to measure the quality and outcomes of palliative care on a routine basis.

What does this paper add? The Palliative Care Outcomes Collaboration (PCOC) is the first effort internationally to measure the outcomes and quality of specialist palliative care services and to benchmark services on a national basis through an independent third party.

What are the implications for practitioners? If outcomes and quality are to be measured on a consistent national basis, standard clinical assessment tools that are used as part of everyday clinical practice are necessary.

Relevance: 20.00%

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory.

There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data.

The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal, lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing the pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the readout light field, the readout beam erases the stored information. A method to avoid this is thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium.

It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower-magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the time at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied in image scrambling or cryptography for optical information storage.

A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
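To make the oscillation-counting idea above concrete, the sketch below estimates a temperature change from a recorded intensity trace. This is not code from the thesis: the function name is illustrative and the per-oscillation temperature step (`delta_T_per_cycle`) is an assumed calibration constant that in practice would depend on the crystal's birefringence dispersion, its thickness and the probe wavelength.

```python
import numpy as np
from scipy.signal import find_peaks

def temperature_change_from_oscillations(intensity, delta_T_per_cycle=1.0):
    """Estimate a temperature change by counting transmitted-intensity oscillations.

    intensity         : 1-D array of intensity samples recorded while the crystal
                        heats or cools (each full oscillation corresponds to one
                        additional 2*pi of birefringence-induced phase shift).
    delta_T_per_cycle : assumed calibration constant, in kelvin per full oscillation.
    """
    peaks, _ = find_peaks(intensity)        # indices of intensity maxima
    full_cycles = max(len(peaks) - 1, 0)    # full oscillations between successive maxima
    return full_cycles * delta_T_per_cycle

# Synthetic example: a cosine trace standing in for the measured transmission.
t = np.linspace(0.0, 1.0, 2000)
trace = 0.5 * (1.0 + np.cos(2.0 * np.pi * 5.0 * t))
print(temperature_change_from_oscillations(trace, delta_T_per_cycle=1.2))
```

The same peak-counting step, applied pixel-by-pixel to an expanded beam imaged on a camera, is how the multi-point (temperature gradient) variant described above could be approximated.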

Relevance: 20.00%

Abstract:

This paper reports on an empirical comparison of seven machine learning algorithms for texture classification, with application to vegetation management in power line corridors. Aiming at classifying tree species in power line corridors, an object-based method is employed. Individual tree crowns are segmented as the basic classification units, and three classic texture features are extracted as input to the classification algorithms. Several widely used performance metrics are used to evaluate the classification algorithms. The experimental results demonstrate that classification performance depends on the performance metric, the characteristics of the datasets and the features used.
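As a rough illustration of this kind of comparison, the sketch below cross-validates a handful of standard classifiers on per-crown feature vectors. It is only a hedged sketch: the random feature matrix stands in for texture features (e.g. GLCM-style statistics) extracted from segmented tree crowns, and the classifier set is generic rather than the seven algorithms evaluated in the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# X: one row of texture features per segmented tree crown (placeholder data),
# y: a tree-species label for each crown (three hypothetical species).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = rng.integers(0, 3, size=200)

classifiers = {
    "SVM (RBF)": SVC(kernel="rbf"),
    "Random forest": RandomForestClassifier(n_estimators=100),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
    "Decision tree": DecisionTreeClassifier(),
}

# Compare classifiers with 5-fold cross-validated accuracy.
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name:15s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```

Swapping `scoring="accuracy"` for other metrics (e.g. macro F1) reproduces the paper's point that the apparent ranking of algorithms can change with the chosen performance metric.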

Relevance: 20.00%

Abstract:

A range of interventions is being implemented in Australia to apprehend drug drivers and deter drug driving behaviour, in particular the recent implementation of random roadside drug testing procedures in Queensland. Given that this countermeasure has a strong deterrence foundation, it is of interest to determine whether deterrence-based perceptual factors are influencing this offending behaviour, or whether self-reported drug driving is heavily dependent upon illicit substance consumption levels and past offending behaviour. This study involved a sample of Queensland motorists (N = 898) who completed a self-report questionnaire that collected a range of information, including drug driving and drug consumption practices, conviction history, and perceptual deterrence factors. The aim was to examine what factors influence current drug driving behaviours. Analysis of the collected data revealed that approximately 20% of participants reported drug driving at least once in the last six months. Overall, there was considerable variability in the respondents' perceptions regarding the certainty, severity and swiftness of legal sanctions, although the largest proportion of the sample did not consider such sanctions to be certain, severe or swift. In regard to predicting those who intended to drug drive again in the future, a combination of perceptual and behavioural factors was associated with such intentions. However, closer examination revealed that behaviours, rather than perceptions, had the greater influence on the current sample's future intentions to offend. This paper further outlines the major findings of the study and highlights that multi-modal interventions are most likely required to reduce the prevalence of drug driving on public roads.

Relevance: 20.00%

Abstract:

Background: Loneliness and low mood are associated with significant negative health outcomes including poor sleep, but the strength of the evidence underlying these associations varies. There is strong evidence that poor sleep quality and low mood are linked, but only emerging evidence that loneliness and poor sleep are associated. Aims: To independently replicate the finding that loneliness and poor subjective sleep quality are associated and to extend past research by investigating lifestyle regularity as a possible mediator of relationships, since lifestyle regularity has been linked to loneliness and poor sleep. Methods: Using a cross-sectional design, 97 adults completed standardized measures of loneliness, lifestyle regularity, subjective sleep quality and mood. Results: Loneliness was a significant predictor of sleep quality. Lifestyle regularity was not a predictor of, nor associated with, mood, sleep quality or loneliness. Conclusions: This study provides an important independent replication of the association between poor sleep and loneliness. However, the mechanism underlying this link remains unclear. A theoretically plausible mechanism for this link, lifestyle regularity, does not explain the relationship between loneliness and poor sleep. The nexus between loneliness and poor sleep is unlikely to be broken by altering the social rhythm of patients who present with poor sleep and loneliness.

Relevance: 20.00%

Abstract:

Advances in digital technology have caused a radical shift in moving image culture. This has occurred in both modes of production and sites of exhibition, resulting in a blurring of boundaries that previously defined a range of creative disciplines. Re-Imagining Animation: The Changing Face of the Moving Image, by Paul Wells and Johnny Hardstaff, argues that as a result of these blurred disciplinary boundaries, the term “animation” has become a “catch all” for describing any form of manipulated moving image practice. Understanding animation predicates the need to (re)define the medium within contemporary moving image culture. Via a series of case studies, the book engages with a range of moving image works, interrogating “how the many and varied approaches to making film, graphics, visual artefacts, multimedia and other intimations of motion pictures can now be delineated and understood” (p. 7). The structure and clarity of content make this book ideally suited to any serious study of contemporary animation which accepts animation as a truly interdisciplinary medium.

Relevance: 20.00%

Abstract:

Aim: Australian residential aged care does not have a system of quality assessment related to clinical outcomes, or comprehensive quality benchmarking. The Residential Care Quality Assessment was developed to fill this gap, and this paper discusses the process by which preliminary benchmarks representing high and low quality were developed for it. Methods: Data were collected from all residents (n = 498) of nine facilities. Numerator–denominator analysis of clinical outcomes occurred at the facility level, with rank-ordered results circulated to an expert panel. The panel identified threshold scores to indicate excellent and questionable care quality, and refined these through a Delphi process. Results: Clinical outcomes varied both within and between facilities; agreed thresholds for excellent and poor outcomes were finalised after three Delphi rounds. Conclusion: Use of the Residential Care Quality Assessment provides a concrete means of monitoring care quality and allows benchmarking across facilities; its regular use could contribute to improved care outcomes within residential aged care in Australia.

Relevance: 20.00%

Abstract:

This research shows that gross pollutant traps (GPTs) continue to play an important role in preventing visible street waste—gross pollutants—from contaminating the environment. The demand for these GPTs calls for stringent quality control, and this research provides a foundation for rigorously examining the devices. A novel and comprehensive testing approach for examining a dry sump GPT was developed. The GPT is designed with internal screens to capture gross pollutants—organic matter and anthropogenic litter. This device has not been previously investigated. Apart from the review of GPTs and gross pollutant data, the testing approach includes four additional aspects: field work and an historical overview of street waste/stormwater pollution, calibration of equipment, hydrodynamic studies, and gross pollutant capture/retention investigations. This work is the first comprehensive investigation of its kind and provides valuable practical information for the current research and any future work pertaining to the operation of GPTs and the management of street waste in the urban environment.

Gross pollutant traps—including patented and registered designs developed by industry—have specific internal configurations and hydrodynamic separation characteristics which demand individual testing and performance assessments. Stormwater devices are usually evaluated by environmental protection agencies (EPAs), professional bodies and water research centres. In the USA, the American Society of Civil Engineers (ASCE) and the Environmental and Water Resources Institute (EWRI) are examples of professional and research organisations actively involved in these evaluation/verification programs. These programs rely largely on field evaluations alone, which are limited in scope, mainly for cost and logistical reasons. In Australia, evaluation/verification programs for new devices in the stormwater industry are not well established. The current limitations in the evaluation methodologies of GPTs have been addressed in this research by establishing a new testing approach. This approach uses a combination of physical and theoretical models to examine in detail the hydrodynamic and capture/retention characteristics of the GPT. The physical model consisted of a 50% scale model GPT rig with screen blockages varying from 0 to 100%. This rig was placed in a 20 m flume, and various inlet and outflow operating conditions were modelled on observations made during the field monitoring of GPTs. Due to infrequent cleaning, the retaining screens inside the GPTs were often observed to be blocked with organic matter. Blocked screens can radically change the hydrodynamic and gross pollutant capture/retention characteristics of a GPT, as shown by this research. This research involved the use of equipment, such as acoustic Doppler velocimeters (ADVs) and dye concentration (Komori) probes, which were deployed for the first time in a dry sump GPT. Hence, it was necessary to rigorously evaluate the capability and performance of these devices, particularly in the case of the custom-made Komori probes, about which little was known. The evaluation revealed that the Komori probes have a frequency response of up to 100 Hz (which is dependent upon fluid velocities), and this was adequate to measure the relevant fluctuations of dye introduced into the GPT flow domain. The outcome of this evaluation was the establishment of methodologies for the hydrodynamic measurements and gross pollutant capture/retention experiments.
The hydrodynamic measurements consisted of point-based acoustic Doppler velocimeter (ADV) measurements, flow field particle image velocimetry (PIV) capture, head loss experiments and computational fluid dynamics (CFD) simulation. The gross pollutant capture/retention experiments included the use of anthropogenic litter components, tracer dye and custom modified artificial gross pollutants. Anthropogenic litter was limited to tin cans, bottle caps and plastic bags, while the artificial pollutants consisted of 40 mm spheres with a range of four buoyancies. The hydrodynamic results led to the definition of global and local flow features. The gross pollutant capture/retention results showed that when the internal retaining screens are fully blocked, the capture/retention performance of the GPT rapidly deteriorates. The overall results showed that the GPT will operate efficiently until at least 70% of the screens are blocked, particularly at high flow rates. This important finding indicates that cleaning operations could be more effectively planned when the GPT capture/retention performance deteriorates. At lower flow rates, the capture/retention performance trends were reversed. There is little difference in the poor capture/retention performance between a fully blocked GPT and a partially filled or empty GPT with 100% screen blockages. The results also revealed that the GPT is designed with an efficient high flow bypass system to avoid upstream blockages. The capture/retention performance of the GPT at medium to high inlet flow rates is close to maximum efficiency (100%). With regard to the design appraisal of the GPT, a raised inlet offers a better capture/retention performance, particularly at lower flow rates. Further design appraisals of the GPT are recommended.

Relevance: 20.00%

Abstract:

Process modeling is a central element in any approach to Business Process Management (BPM). However, what hinders both practitioners and academics is the lack of support for assessing the quality of process models, let alone for realizing high-quality process models. Existing frameworks are either highly conceptual or too general. At the same time, various techniques, tools, and research results are available that cover fragments of the issue at hand. This chapter presents the SIQ framework, which on the one hand integrates concepts and guidelines from existing frameworks, and on the other links these concepts to current research in the BPM domain. Three different types of quality are distinguished, and for each of these, concrete metrics, available tools, and guidelines are provided. While the basis of the SIQ framework is thought to be rather robust, its external pointers can be updated with newer insights as they emerge.
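As an illustration of the kind of concrete, structural metric such a framework can point to, the sketch below computes two commonly cited size/connectivity indicators over a toy process graph. Both the metrics chosen and the toy model are illustrative assumptions, not necessarily the metrics prescribed by the SIQ framework.

```python
def structural_metrics(nodes, arcs):
    """Compute simple structural indicators of a process model's comprehensibility.

    nodes : iterable of node identifiers (tasks, events, gateways)
    arcs  : iterable of (source, target) pairs
    Returns size, density (arcs relative to the maximum possible) and the
    coefficient of connectivity (average arcs per node); illustrative only.
    """
    n, a = len(set(nodes)), len(set(arcs))
    density = a / (n * (n - 1)) if n > 1 else 0.0
    connectivity = a / n if n else 0.0
    return {"size": n, "density": density, "coefficient_of_connectivity": connectivity}

# Tiny example model: start -> A -> XOR split -> (B | C) -> end
nodes = ["start", "A", "xor", "B", "C", "end"]
arcs = [("start", "A"), ("A", "xor"), ("xor", "B"),
        ("xor", "C"), ("B", "end"), ("C", "end")]
print(structural_metrics(nodes, arcs))
```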

Relevance: 20.00%

Abstract:

Despite the global financial downturn, the Australian rail industry is in a period of expansion. Reports indicate that the industry is not attracting sufficient entry-level and mid-career engineers and skilled technicians from within the Australian labour market, and is facing widespread retirements from an ageing workforce. This paper reports on a completed qualitative study that explores the perceptions of engineering students, their lecturers, careers advisors and recruitment consultants regarding rail as a brand and careers in the rail industry. Findings are presented about career knowledge, job characteristic preferences, branding and image. They indicate that rail as a brand has a dated image, that young people and their influencers have little knowledge of rail careers, and that rail could better focus its image and recruitment strategies. Conclusions include suggestions for more effective attraction and image strategies for the industry, and for further research.

Relevance: 20.00%

Abstract:

Quality and bitrate modeling is essential for effectively adapting the bitrate and quality of videos delivered to multi-platform devices over resource-constrained heterogeneous networks. The recent model proposed by Wang et al. estimates the bitrate and quality of videos in terms of the frame rate and quantization parameter. However, to build an effective video adaptation framework, it is crucial to incorporate the spatial resolution in the analytical model for bitrate and perceptual quality adaptation. Hence, this paper proposes an analytical model to estimate the bitrate of videos in terms of the quantization parameter, frame rate, and spatial resolution. The model fits the measured data accurately, which is evident from the high Pearson correlation. The proposed model is based on the observation that the relative reduction in bitrate due to decreasing spatial resolution is independent of the quantization parameter and frame rate. This model can be used in a rate-constrained bit-stream adaptation scheme that selects the scalability parameters to optimize the perceptual quality for a given bandwidth constraint.
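To illustrate the separability implied by that observation, the sketch below uses a bitrate model of the form R(q, t, s) = R_max · R_q(q) · R_t(t) · R_s(s), so that the ratio R(q, t, s/2) / R(q, t, s) does not depend on q or t. The functional forms and all parameter values are illustrative assumptions in the spirit of models such as Wang et al.'s, not the fitted model reported in the paper.

```python
import numpy as np

def bitrate_model(q, t, s, R_max=5000.0, q_min=16, t_max=30.0, s_max=1.0,
                  a=0.16, b=0.6, c=0.9):
    """Separable bitrate model sketch (kbps); all parameters are assumed values.

    q : quantization parameter    t : frame rate (fps)
    s : spatial resolution as a fraction of the full resolution
    """
    Rq = np.exp(-a * (q - q_min))   # bitrate falls roughly exponentially with QP
    Rt = (t / t_max) ** b           # power-law reduction with frame rate
    Rs = (s / s_max) ** c           # power-law reduction with spatial resolution
    return R_max * Rq * Rt * Rs

# The relative reduction from halving the resolution is the same at different QPs,
# mirroring the independence observation stated in the abstract.
for q in (20, 32):
    full = bitrate_model(q, 30, 1.0)
    half = bitrate_model(q, 30, 0.5)
    print(f"QP={q}: R(half)/R(full) = {half / full:.3f}")
```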

Relevance: 20.00%

Abstract:

Goals: Few studies have repeatedly evaluated quality of life and potentially relevant factors in patients with benign primary brain tumor. The purpose of this study was to explore the relationships between symptom distress, functional status, depression, and quality of life prior to surgery (T1) and at 1 month post-discharge (T2).

Patients and methods: This was a prospective cohort study including 58 patients with benign primary brain tumor in one teaching hospital in the Taipei area of Taiwan. The research instruments included the M.D. Anderson Symptom Inventory, the Functional Independence Measure scale, the Hospital Depression Scale, and the Functional Assessment of Cancer Therapy-Brain.

Results: Symptom distress (T1: r = −0.90, p < 0.01; T2: r = −0.52, p < 0.01), functional status (T1: r = 0.56, p < 0.01), and depression (T1: r = −0.71, p < 0.01) showed significant relationships with patients' quality of life. Multivariate analysis identified that symptom distress (explaining 80.2% of variance, R²inc = 0.802, p = 0.001) and depression (explaining a further 5.2%, R²inc = 0.052, p < 0.001) had significant independent influences on quality of life prior to surgery (T1), after controlling for key demographic and medical variables. Furthermore, only symptom distress (explaining 27.1%, R²inc = 0.271, p = 0.001) continued to have a significant independent influence on quality of life at 1 month after discharge (T2).

Conclusions: The study highlights the potential importance of a patient's symptom distress for quality of life prior to and following surgery. Health professionals should ask about symptom distress over time. Specific interventions for symptoms may reduce their impact on quality of life. Additional studies should evaluate the effect of symptom distress on the longer-term quality of life of patients with benign brain tumor.
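For readers unfamiliar with the R²inc values reported above, the sketch below shows how an incremental R² is obtained in a hierarchical regression: fit the control variables first, add the predictor block, and take the difference in R². The data and variable names are hypothetical, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 58                                   # same order of magnitude as the study sample
controls = rng.normal(size=(n, 2))       # hypothetical demographic/medical covariates
symptom_distress = rng.normal(size=n)
qol = -0.9 * symptom_distress + 0.2 * controls[:, 0] + rng.normal(scale=0.4, size=n)

def r_squared(X, y):
    """R^2 of an ordinary least squares fit with an intercept."""
    return sm.OLS(y, sm.add_constant(X)).fit().rsquared

r2_controls = r_squared(controls, qol)
r2_full = r_squared(np.column_stack([controls, symptom_distress]), qol)
print(f"R^2 increment for symptom distress = {r2_full - r2_controls:.3f}")
```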

Relevance: 20.00%

Abstract:

With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Therefore, image semantics extraction is of great importance to content-based image retrieval because it allows users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: what is the basic unit of semantic representation? how can the semantic unit be linked to high-level image knowledge? how can contextual information be stored and utilized for image annotation?

In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next generation of the Web, aims at making the content of whatever type of media understandable not only to humans but also to machines. Due to the large amounts of multimedia data prevalent on the Web, researchers and industries are beginning to pay more attention to the Multimedia Semantic Web. Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, is still a worthwhile investigation. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in four phases, as below.

1) Salient object extraction. A salient object serves as the basic unit in image semantic extraction as it captures the common visual property of the objects. Image segmentation is often used as the first step for detecting salient objects, but most segmentation algorithms often fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region merging framework.

2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used for improving object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts.

3) Image object annotation. In this phase, each object is labelled with a mid-level concept from the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. After that, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts.

4) Scene semantic annotation. The scene semantic extraction phase derives the scene type by using both mid-level concepts and domain contextual knowledge in the ontologies.
Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist with which scene type more frequently. The scene configuration is represented in a probabilistic graph model, and probabilistic inference is employed to calculate the scene type given an annotated image. To evaluate the proposed methods, a series of experiments was conducted on a large set of fully annotated outdoor scene images. These include a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
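A minimal sketch of the "candidate labels from an SVM, then contextual disambiguation" idea in phase 3 appears below. The concept list, the random training data and the context-plausibility weights are all hypothetical placeholders; the thesis's ontologies and probabilistic scene inference are far richer than this simple weighting step.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical mid-level concepts and training data (features of salient objects).
concepts = ["sky", "water", "sand", "tree"]
rng = np.random.default_rng(0)
X_train = rng.normal(size=(400, 8))
y_train = rng.integers(0, len(concepts), size=400)

# SVM producing candidate-label probabilities for each salient object.
svm = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

# Assumed contextual knowledge: how plausible each concept is given the scene
# context already inferred from the other objects (here, a "beach"-like context).
context_plausibility = np.array([0.9, 0.9, 0.8, 0.3])   # sky, water, sand, tree

def annotate(obj_features):
    """Combine SVM candidate scores with contextual plausibility to pick a label."""
    candidate_scores = svm.predict_proba(obj_features.reshape(1, -1))[0]
    final_scores = candidate_scores * context_plausibility  # down-weight implausible concepts
    return concepts[int(np.argmax(final_scores))]

print(annotate(rng.normal(size=8)))
```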

Relevance: 20.00%

Abstract:

Internet and Web services have been used in both teaching and learning and are gaining popularity in today's world. E-learning is becoming popular and is considered the latest advance in technology-based learning. Despite the potential advantages for learning in a small country like Bhutan, there is a lack of eServices at the Paro College of Education. This study investigated students' attitudes towards online communities and frequency of access to the Internet, and how students locate and use different sources of information in their project tasks. Since improvement was at the heart of this research, an action research approach was used. Based on the idea of purposeful sampling, a semi-structured interview and observations were used as data collection instruments. Ten randomly selected students (5 girls and 5 boys) participated in this research as the study group. The findings indicated that there is a lack of educational information technology services, such as e-learning, at the college. The very slow Internet connection was the main barrier to learning through e-learning or accessing Internet resources. There is a strong relationship between the quality of written tasks and the source of the information, and between Web searching and learning. The sources of information used in assignments and project work are limited to books in the library, which are often outdated and of poor quality. Project tasks submitted by most of the students were of poor quality.