687 results for Technological quality
Abstract:
Background, aim, and scope Urban motor vehicle fleets are a major source of particulate matter pollution, especially of ultrafine particles (diameters < 0.1 µm), and exposure to particulate matter has known serious health effects. A considerable body of literature is available on vehicle particle emission factors derived using a wide range of measurement methods for different particle sizes, conducted in different parts of the world. Choosing the most suitable particle emission factors for transport modelling and health impact assessments is therefore a very difficult task. The aim of this study was to derive a comprehensive set of tailpipe particle emission factors for different vehicle and road type combinations, covering the full size range of particles emitted, which are suitable for modelling urban fleet emissions. Materials and methods A large body of data available in the international literature on particle emission factors for motor vehicles derived from measurement studies was compiled and subjected to advanced statistical analysis to determine the most suitable emission factors to use in modelling urban fleet emissions. Results This analysis resulted in five statistical models which explained 86%, 93%, 87%, 65% and 47% of the variation in published emission factors for particle number, particle volume, PM1, PM2.5 and PM10, respectively. A sixth model, for total particle mass, was proposed, but no significant explanatory variables were identified in the analysis. From the outputs of these statistical models, the most suitable particle emission factors were selected.
This selection was based on the statistical robustness of the model outputs, including consideration of conservative average particle emission factors with the lowest standard errors, narrowest 95% confidence intervals and largest sample sizes, and on the explanatory model variables, which were Vehicle Type (all particle metrics), Instrumentation (particle number and PM2.5), Road Type (PM10), and Size Range Measured and Speed Limit on the Road (particle volume). Discussion A multiplicity of factors needs to be considered in determining emission factors that are suitable for modelling motor vehicle emissions, and this study derived a set of average emission factors suitable for quantifying motor vehicle tailpipe particle emissions in developed countries. Conclusions The comprehensive set of tailpipe particle emission factors presented in this study for different vehicle and road type combinations enables the full size range of particles generated by fleets to be quantified, including ultrafine particles (measured in terms of particle number). These emission factors are particularly applicable in regions that lack funding to undertake measurements, or that have insufficient measurement data from which to derive emission factors for their region. Recommendations and perspectives In urban areas motor vehicles continue to be a major source of particulate matter pollution and of ultrafine particles. To manage this major pollution source, it is critical that methods are available to quantify the full size range of particles emitted, for traffic modelling and health impact assessments.
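The abstract's selection criteria (lowest standard errors, narrowest 95% confidence intervals, largest sample sizes) can be illustrated with a minimal sketch. The emission factor values, group names, and the fixed t-critical value below are invented for illustration and are not taken from the study:

```python
import math
from statistics import mean, stdev

# Hypothetical published particle-number emission factors (particles/km),
# grouped by vehicle type as in the statistical models described above.
published_efs = {
    "passenger_car": [2.1e14, 1.8e14, 2.4e14, 2.0e14, 1.9e14],
    "heavy_duty":    [8.5e14, 9.1e14, 7.8e14, 8.9e14],
}

def summarise(values, t_crit=2.0):
    """Mean, standard error, and an approximate 95% CI for one group."""
    n = len(values)
    m = mean(values)
    se = stdev(values) / math.sqrt(n)
    return {"mean": m, "se": se, "ci95": (m - t_crit * se, m + t_crit * se), "n": n}

# Groups with small standard errors, narrow CIs, and large n would be
# preferred when selecting a representative emission factor.
summary = {vtype: summarise(efs) for vtype, efs in published_efs.items()}
for vtype, s in summary.items():
    print(f"{vtype}: mean={s['mean']:.2e}, se={s['se']:.2e}, n={s['n']}")
```

In practice the study fit regression models with explanatory variables (vehicle type, instrumentation, road type); this sketch only shows the per-group summary statistics the selection criteria refer to.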
Abstract:
From the business viewpoint, a railway timetable is the list of products offered by railway transport operators to their customers, specifying the schedules of all train services on a railway line or network. To evaluate the quality of train service schedules, a number of indices are proposed in this paper. These indices primarily take passengers' needs, such as waiting time, transfer time and transport capacity, into consideration. Delay rate is usually used in post-evaluation; in this study, we instead propose to evaluate the probability that the scheduled train services will be delayed and the recovery ability of the timetable after a delay has occurred. The evaluation identifies possible problems in the services, such as excessive waiting time, non-seamless transfers, and a high possibility of delay. This paper also discusses how these problems can be improved through adjustments to the timetable. The evaluation indices and the timetable adjustment method are then applied to a case study of the Hu-Ning-Hang railway in China, followed by a discussion of the merits of the proposed indices for timetable evaluation and possible improvement.
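As a sketch of a waiting-time index of the kind the abstract describes, the snippet below computes the expected waiting time for passengers arriving at random, using the standard result E[W] = Σh² / (2Σh) over successive headways h. The departure times are hypothetical, and the paper's actual indices are not specified in the abstract:

```python
# Hypothetical departure times (minutes past the hour) at one station.
departures = [0, 12, 27, 45, 60]

def mean_waiting_time(times):
    """Expected waiting time for passengers arriving uniformly at random:
    E[W] = sum(h_i^2) / (2 * sum(h_i)), where h_i are successive headways.
    Irregular headways raise E[W] above half the average headway."""
    headways = [b - a for a, b in zip(times, times[1:])]
    return sum(h * h for h in headways) / (2 * sum(headways))

print(mean_waiting_time(departures))  # minutes
```

With perfectly even headways the index reduces to half the headway; the excess over that baseline is one way to flag the "excessive waiting time" problem mentioned above.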
Abstract:
In Australia, rural research and development corporations and companies expended over A$500 million on agricultural research and development. A substantial proportion of this is invested in R&D in the beef industry. The Australian beef industry exports almost A$5 billion of product annually and invests heavily in new product development to improve beef quality and production efficiency. Review points are critical for effective new product development, yet many research and development bodies, particularly publicly funded ones, appear to ignore the importance of assessing products prior to their release. Significant sums of money are invested in developing technological innovations that have low levels and rates of adoption. Adoption rates could be improved if developers focused more on technology uptake and less on proving that their technologies can be applied in practice. Several approaches have been put forward in an effort to improve rates of adoption in operational settings. This paper presents a study of key technological innovations in the Australian beef industry to assess the use of multiple criteria in evaluating the potential uptake of new technologies. Findings indicate that using multiple criteria to evaluate innovations before commercializing a technology enables researchers to better understand the issues that may inhibit adoption.
Abstract:
This article explores the quality of accounting information in listed family firms. The authors exploit the features of the Italian equity market, characterized by high ownership concentration across all types of firms, to disentangle the effects of family ownership from those of other major blockholders on the quality of accounting information. The findings document that family firms convey financial information of higher quality than their nonfamily peers. Furthermore, the authors provide evidence that the determinants of accounting quality differ across family and nonfamily firms.
Abstract:
While the information services function's (ISF) service quality is not a new concept and has received considerable attention for over two decades, cross-cultural research on ISF service quality is not very mature. The author argues that the relationship between cultural dimensions and ISF service quality dimensions may provide useful insights into how organisations should deal with different cultural groups. This paper shows that ISF service quality dimensions vary from one culture to another. The study adopts Hofstede's (1980, 1991) typology of cultures and the "zones of tolerance" (ZOT) service quality measure reported by Kettinger & Lee (2005) as its primary theoretical base. The author hypothesised and tested the influences of culture on users' service quality perceptions and found strong empirical support for the study's hypotheses. The results indicate that, as a result of their cultural characteristics, users vary both in their overall service quality perceptions and in their perceptions on each of the four dimensions of ZOT service quality.
Abstract:
Adoption of innovation in agriculture has traditionally been described as slow to diffuse. This paper therefore describes a case study grounded in PD to address a disruptive technology/system within the livestock industry. Results of the process were positive, as active engagement of stakeholders returned rich data. The contribution of the work is also presented as grounds for further design research in the livestock industry.
Abstract:
Australia is leading the way in establishing a national system (the Palliative Care Outcomes Collaboration, PCOC) to measure the outcomes and quality of specialist palliative care services and to benchmark services across the country. This article reports on analysis of data collected routinely at point-of-care on 5939 patients treated by the first fifty-one services that voluntarily joined PCOC. By March 2009, 111 services had agreed to join PCOC, representing more than 70% of services and more than 80% of specialist palliative care patients nationally. All states and territories are involved in this unique process, which has involved extensive consultation and infrastructure, and close collaboration between health services and researchers. The challenges of dealing with wide variation in outcomes and practice and the progress achieved to date are described. PCOC is aiming to improve understanding of the reasons for variations in clinical outcomes between specialist palliative care patients and differences in service outcomes, as a critical step in an ongoing process to improve both service quality and patient outcomes. What is known about the topic? Governments internationally are grappling with how best to provide care for people with life-limiting illnesses and how best to measure the outcomes and quality of that care. There is little international evidence on how to measure the quality and outcomes of palliative care on a routine basis. What does this paper add? The Palliative Care Outcomes Collaboration (PCOC) is the first effort internationally to measure the outcomes and quality of specialist palliative care services and to benchmark services on a national basis through an independent third party. What are the implications for practitioners? If outcomes and quality are to be measured on a consistent national basis, standard clinical assessment tools that are used as part of everyday clinical practice are necessary.
Abstract:
The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices store data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify them slightly for higher-density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high-speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but few commercial products are presently available.
Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing. Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing, in which the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium.
This ionic grating is insensitive to the readout beam, and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower-magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered.
The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest-quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process. To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly the degradation and recovery process, and confirms that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths.
As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
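The non-contact temperature measurement described above, counting birefringence-induced intensity oscillations, can be sketched as follows. The probe wavelength, crystal length, and thermo-optic coefficient below are assumed order-of-magnitude values for illustration, not figures from the thesis:

```python
# Sketch of the oscillation-counting idea: a beam passing through a
# birefringent crystal shows intensity oscillations as temperature changes,
# because the birefringence delta_n is temperature dependent. One full
# oscillation corresponds to one extra wavelength of optical path difference:
#   dT_per_osc = wavelength / (L * d(delta_n)/dT)
# All numbers below are illustrative assumptions, not values from the thesis.
wavelength = 633e-9          # probe wavelength (m), assumed
crystal_length = 10e-3       # crystal length along the beam (m), assumed
dbirefringence_dT = 4e-5     # d(delta_n)/dT (1/K), assumed order of magnitude

def temperature_change(n_oscillations):
    """Temperature change deduced from a count of intensity oscillations."""
    dT_per_osc = wavelength / (crystal_length * dbirefringence_dT)
    return n_oscillations * dT_per_osc

print(temperature_change(10))  # kelvin, for 10 observed oscillations
```

With an expanded beam, running this count independently for each monitored region gives the spatially resolved temperatures from which gradients can be deduced, as the thesis describes.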
Abstract:
Background: Loneliness and low mood are associated with significant negative health outcomes including poor sleep, but the strength of the evidence underlying these associations varies. There is strong evidence that poor sleep quality and low mood are linked, but only emerging evidence that loneliness and poor sleep are associated. Aims: To independently replicate the finding that loneliness and poor subjective sleep quality are associated and to extend past research by investigating lifestyle regularity as a possible mediator of relationships, since lifestyle regularity has been linked to loneliness and poor sleep. Methods: Using a cross-sectional design, 97 adults completed standardized measures of loneliness, lifestyle regularity, subjective sleep quality and mood. Results: Loneliness was a significant predictor of sleep quality. Lifestyle regularity was not a predictor of, nor associated with, mood, sleep quality or loneliness. Conclusions: This study provides an important independent replication of the association between poor sleep and loneliness. However, the mechanism underlying this link remains unclear. A theoretically plausible mechanism for this link, lifestyle regularity, does not explain the relationship between loneliness and poor sleep. The nexus between loneliness and poor sleep is unlikely to be broken by altering the social rhythm of patients who present with poor sleep and loneliness.
Abstract:
Aim Australian residential aged care does not have a system of quality assessment related to clinical outcomes, or comprehensive quality benchmarking. The Residential Care Quality Assessment was developed to fill this gap, and this paper discusses the process by which preliminary benchmarks representing high and low quality were developed for it. Methods Data were collected from all residents (n = 498) of nine facilities. Numerator-denominator analysis of clinical outcomes occurred at facility level, with rank-ordered results circulated to an expert panel. The panel identified threshold scores to indicate excellent and questionable care quality, and refined these through a Delphi process. Results Clinical outcomes varied both within and between facilities; agreed thresholds for excellent and poor outcomes were finalised after three Delphi rounds. Conclusion Use of the Residential Care Quality Assessment provides a concrete means of monitoring care quality and allows benchmarking across facilities; its regular use could contribute to improved care outcomes within residential aged care in Australia.
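The numerator-denominator analysis mentioned in the Methods can be sketched as below. The facility codes and counts are hypothetical; in the study, rank-ordered rates like these were circulated to the expert panel for Delphi threshold-setting:

```python
# Hypothetical facility-level outcome data: numerator = residents with a
# given clinical outcome, denominator = residents assessed at that facility.
facility_counts = {
    "A": (3, 52),
    "B": (9, 61),
    "C": (1, 40),
    "D": (6, 58),
}

def rank_by_rate(counts):
    """Compute each facility's outcome rate and rank facilities from
    lowest to highest rate, as input for expert benchmarking."""
    rates = {facility: num / den for facility, (num, den) in counts.items()}
    return sorted(rates.items(), key=lambda kv: kv[1])

for facility, rate in rank_by_rate(facility_counts):
    print(f"{facility}: {rate:.3f}")
```

The panel would then set thresholds on this ranked list (e.g. rates flagged as excellent or questionable) and refine them over Delphi rounds; that judgment step is not something code can replace.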
Abstract:
Process modeling is a central element in any approach to Business Process Management (BPM). However, what hinders both practitioners and academics is the lack of support for assessing the quality of process models, let alone for realizing high-quality process models. Existing frameworks are either highly conceptual or too general, while various techniques, tools, and research results are available that cover only fragments of the issue at hand. This chapter presents the SIQ framework, which on the one hand integrates concepts and guidelines from existing frameworks and on the other links these concepts to current research in the BPM domain. Three different types of quality are distinguished, and for each of these levels concrete metrics, available tools, and guidelines are provided. While the basis of the SIQ framework is thought to be rather robust, its external pointers can be updated with newer insights as they emerge.
Abstract:
Goals: Few studies have repeatedly evaluated quality of life and potentially relevant factors in patients with benign primary brain tumor. The purpose of this study was to explore the relationships between symptom distress, functional status, depression, and quality of life prior to surgery (T1) and 1 month post-discharge (T2). Patients and methods: This was a prospective cohort study of 58 patients with benign primary brain tumor in one teaching hospital in the Taipei area of Taiwan. The research instruments included the M.D. Anderson Symptom Inventory, the Functional Independence Measure scale, the Hospital Depression Scale, and the Functional Assessment of Cancer Therapy-Brain. Results: Symptom distress (T1: r = −0.90, p < 0.01; T2: r = −0.52, p < 0.01), functional status (T1: r = 0.56, p < 0.01), and depression (T1: r = −0.71, p < 0.01) demonstrated a significant relationship with patients' quality of life. Multivariate analysis identified that symptom distress (explaining 80.2%, R²inc = 0.802, p = 0.001) and depression (explaining 5.2%, R²inc = 0.052, p < 0.001) continued to have a significant independent influence on quality of life prior to surgery (T1) after controlling for key demographic and medical variables. Furthermore, only symptom distress (explaining 27.1%, R²inc = 0.271, p = 0.001) continued to have a significant independent influence on quality of life at 1 month after discharge (T2). Conclusions: The study highlights the potential importance of a patient's symptom distress on quality of life prior to and following surgery. Health professionals should inquire about symptom distress over time. Specific interventions for symptoms may improve the symptom impact on quality of life. Additional studies should evaluate symptom distress on longer-term quality of life of patients with benign brain tumor.
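The incremental-R² logic behind figures like these (entering symptom distress first, then depression, and reporting the additional variance explained) can be sketched as below. The data are simulated with invented coefficients, so the numbers will not match the study's:

```python
import numpy as np

# Simulated stand-ins for the study's variables (n = 58 as in the abstract);
# the effect sizes here are invented for illustration.
rng = np.random.default_rng(0)
n = 58
symptom = rng.normal(size=n)
depression = 0.5 * symptom + rng.normal(scale=0.8, size=n)
qol = -0.9 * symptom - 0.3 * depression + rng.normal(scale=0.3, size=n)

def r_squared(X, y):
    """R^2 from an ordinary least squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Hierarchical entry: symptom distress alone, then adding depression.
r2_step1 = r_squared(symptom[:, None], qol)
r2_step2 = r_squared(np.column_stack([symptom, depression]), qol)
print(r2_step2 - r2_step1)  # incremental R^2 attributable to depression
```

Adding a predictor can never decrease R², so the increment is non-negative; the study's significance tests for each increment are a separate step not shown here.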
Abstract:
Internet and Web services have been used in both teaching and learning and are gaining popularity in today's world. E-learning is becoming popular and is considered the latest advance in technology-based learning. Despite the potential advantages for learning in a small country like Bhutan, there is a lack of eServices at the Paro College of Education. This study investigated students' attitudes towards online communities and frequency of access to the Internet, and how students locate and use different sources of information in their project tasks. Since improvement was at the heart of this research, an action research approach was used. Based on the idea of purposeful sampling, a semi-structured interview and observations were used as data collection instruments. Ten randomly selected students (five girls and five boys) participated in this research as the control group. The study findings indicated a lack of educational information technology services, such as e-learning, at the college. A very slow Internet connection was the main barrier to learning via e-learning or accessing Internet resources. There is a strong relationship between the quality of a written task and the source of the information, and between Web searching and learning. The sources of information used in assignments and project work are limited to books in the library, which are often outdated and of poor quality. Project tasks submitted by most of the students were of poor quality.
Abstract:
Advances in data mining have provided techniques for automatically discovering underlying knowledge and extracting useful information from large volumes of data. Data mining offers tools for quick discovery of relationships, patterns and knowledge in large complex databases. Application of data mining to manufacturing is relatively limited, mainly because of the complexity of manufacturing data. The growing self-organizing map (GSOM) algorithm has been proven efficient for analyzing unsupervised DNA data. However, it produced unsatisfactory clustering when used on some large manufacturing data. In this paper a data mining methodology is proposed using a GSOM tool developed from a modified GSOM algorithm. The proposed method is used to generate clusters of good and faulty products from a manufacturing dataset. The clustering quality (CQ) measure proposed in the paper is used to evaluate the performance of the cluster maps. The paper also proposes automatic identification of variables to find the most probable causative factor(s) that discriminate between good and faulty products by quickly examining historical manufacturing data. The proposed method offers manufacturers a way to smooth the production flow and improve product quality. Simulation results on small and large manufacturing data show the effectiveness of the proposed method.
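Since the paper's CQ measure is not specified in the abstract, the sketch below uses a generic stand-in (between-centroid distance divided by mean within-cluster spread) to show how a cluster map separating good and faulty products might be scored. The data points, cluster labels, and the measure itself are illustrative assumptions, not the paper's method:

```python
from statistics import mean

# Hypothetical 2-D manufacturing measurements, clustered as good / faulty.
clusters = {
    "good":   [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9)],
    "faulty": [(3.0, 3.2), (3.1, 2.9), (2.9, 3.1)],
}

def centroid(points):
    """Coordinate-wise mean of a list of points."""
    return tuple(mean(coord) for coord in zip(*points))

def dist(a, b):
    """Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def quality_ratio(clusters):
    """Between-centroid distance over mean within-cluster spread, for two
    clusters; higher values mean better-separated clusters. A stand-in for
    the paper's (unspecified) CQ measure."""
    cents = {label: centroid(pts) for label, pts in clusters.items()}
    (_, ca), (_, cb) = cents.items()
    between = dist(ca, cb)
    within = mean(dist(p, cents[label]) for label, pts in clusters.items() for p in pts)
    return between / within

print(quality_ratio(clusters))
```

A ratio well above 1 indicates that the good/faulty groups are clearly separated in the map, which is the property any clustering-quality measure for this task would reward.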
Abstract:
This paper presents the results of a pilot study examining the factors that impact most on the effective implementation of, and improvement to, Quality Management Systems (QMSs) amongst Indonesian construction companies. Nine critical factors were identified from an extensive literature review, and a survey was conducted of 23 respondents from three specific groups (Quality Managers, Project Managers, and Site Engineers) undertaking work in the Indonesian infrastructure construction sector. The data were initially analyzed using simple descriptive techniques. This study reveals that different groups within the sector hold different opinions of the factors, regardless of the degree of importance of each factor. However, the evaluation of construction project success and incentive schemes for high-performing staff are the two factors considered very important by most respondents in all three groups. In terms of their assessment of tools for measuring contractors' performance, additional QMS guidelines, techniques related to QMS practice provided by the Government, and benchmarking, a clear majority in each group regarded their usefulness as 'of some importance'.