350 results for Input timedelays
Abstract:
Commonwealth Scientific and Industrial Research Organisation (CSIRO) has recently conducted a technology demonstration of a novel fixed wireless broadband access system in rural Australia. The system is based on multi-user multiple-input multiple-output orthogonal frequency division multiplexing (MU-MIMO-OFDM). It demonstrated an uplink of six simultaneous users at distances ranging from 10 m to 8.5 km from a central tower, achieving a spectral efficiency of 20 bit/s/Hz. This paper reports on the analysis of channel capacity and bit error probability simulation based on the measured MU-MIMO-OFDM channels obtained during the demonstration, and their comparison with results based on channels simulated by a novel geometric-optics-based channel model suitable for MU-MIMO-OFDM in rural areas. Despite its simplicity, the model was found to predict channel capacity and bit error probability accurately for a typical MU-MIMO-OFDM deployment scenario.
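As an illustrative aside (not the authors' analysis), the spectral-efficiency figure above can be related to the standard MIMO Shannon capacity formula; the sketch below assumes an i.i.d. Rayleigh channel as a stand-in for the measured MU-MIMO channels:

```python
import numpy as np

def mimo_capacity(H, snr_linear):
    """Shannon capacity (bit/s/Hz) of a flat-fading MIMO channel H with
    equal power split across the transmit antennas:
    C = log2 det(I + (SNR / Nt) * H H^H)."""
    n_rx, n_tx = H.shape
    gram = H @ H.conj().T
    return float(np.real(np.log2(np.linalg.det(
        np.eye(n_rx) + (snr_linear / n_tx) * gram))))

# Toy uplink: 6 single-antenna users to a 6-element tower array,
# i.i.d. Rayleigh fading used here purely as a stand-in channel.
rng = np.random.default_rng(0)
H = (rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))) / np.sqrt(2)
capacity = mimo_capacity(H, snr_linear=100.0)  # 20 dB SNR
```

Spectral efficiencies of the order reported above are plausible for a well-conditioned 6x6 channel at moderate SNR.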
Abstract:
This note examines the productive efficiency of 62 starting guards during the 2011/12 National Basketball Association (NBA) season. This period coincides with the phenomenal and largely unanticipated performance of New York Knicks’ starting point guard Jeremy Lin and the attendant public and media hype known as Linsanity. We employ a data envelopment analysis (DEA) approach that includes allowance for an undesirable output, here turnovers per game, with the desirable outputs of points, rebounds, assists, steals, and blocks per game and an input of minutes per game. The results indicate that, depending upon the specification, between 29 and 42 percent of NBA guards are fully efficient, including Jeremy Lin, with a mean inefficiency of 3.7 and 19.2 percent, respectively. However, while Jeremy Lin is technically efficient, he seldom serves as a benchmark for inefficient players, at least when compared with established players such as Chris Paul and Dwyane Wade. This suggests the uniqueness of Jeremy Lin’s productive solution and may explain why his unique style of play, encompassing individual brilliance, unselfish play, and team leadership, is of such broad public appeal.
Abstract:
The effects of tumour motion during radiation therapy delivery have been widely investigated. Motion effects have become increasingly important with the introduction of dynamic radiotherapy delivery modalities such as enhanced dynamic wedges (EDWs) and intensity modulated radiation therapy (IMRT), where a dynamically collimated radiation beam is delivered to the moving target, resulting in dose blurring and interplay effects which are a consequence of the combined tumour and beam motion. Prior to this work, reported studies on EDW-based interplay effects have been restricted to the use of experimental methods for assessing single-field non-fractionated treatments. In this work, the interplay effects have been investigated for EDW treatments; single- and multiple-field treatments have been studied using experimental and Monte Carlo (MC) methods. Initially, this work experimentally studies interplay effects for single-field non-fractionated EDW treatments, using radiation dosimetry systems placed on a sinusoidally moving platform. A number of wedge angles (60°, 45° and 15°), field sizes (20 × 20, 10 × 10 and 5 × 5 cm²), amplitudes (10-40 mm in steps of 10 mm) and periods (2 s, 3 s, 4.5 s and 6 s) of tumour motion are analysed (using gamma analysis) for parallel and perpendicular motions (where the tumour and jaw motions are either parallel or perpendicular to each other). For parallel motion it was found that both the amplitude and period of tumour motion affect the interplay; this becomes more prominent where the collimator and tumour speeds become identical. For perpendicular motion the amplitude of tumour motion is the dominant factor, whereas varying the period of tumour motion has no observable effect on the dose distribution. The wedge angle results suggest that the use of a large wedge angle generates greater dose variation for both parallel and perpendicular motions.
The use of a small field size with a large tumour motion results in the loss of the wedged dose distribution for both parallel and perpendicular motion. From these single-field measurements, a motion amplitude and period have been identified which show the poorest agreement between the target motion and dynamic delivery, and these are used as the ‘worst case motion parameters’. The experimental work is then extended to multiple-field fractionated treatments. Here a number of pre-existing, multiple-field, wedged lung plans are delivered to the radiation dosimetry systems, employing the worst case motion parameters. Moreover, a four-field EDW lung plan (using a 4D CT data set) is delivered to the IMRT quality control phantom with a dummy tumour insert over four fractions using the worst case parameters, i.e. 40 mm amplitude and 6 s period. The analysis of the film doses using gamma analysis at 3%/3 mm indicates non-averaging of the interplay effects for this particular study, with a gamma pass rate of 49%. To enable Monte Carlo modelling of the problem, the DYNJAWS component module (CM) of the BEAMnrc user code is validated and automated. DYNJAWS has recently been introduced to model the dynamic wedges, and is therefore commissioned for 6 MV and 10 MV photon energies. It is shown that this CM can accurately model the EDWs for a number of wedge angles and field sizes. The dynamic and step-and-shoot modes of the CM are compared for their accuracy in modelling the EDW, and it is shown that the dynamic mode is more accurate. An automation of the DYNJAWS-specific input file, which specifies the probability of selection of a subfield and the respective jaw coordinates, has been carried out; this automation simplifies the generation of the BEAMnrc input files for DYNJAWS. The commissioned DYNJAWS model is then used to study multiple-field EDW treatments using MC methods.
The 4D CT data of an IMRT phantom with the dummy tumour is used to produce a set of Monte Carlo simulation phantoms, onto which the delivery of single-field and multiple-field EDW treatments is simulated. A number of static and motion multiple-field EDW plans have been simulated. The comparison of dose volume histograms (DVHs) and gamma volume histograms (GVHs) for four-field EDW treatments (where the collimator and patient motion is in the same direction) using small (15°) and large (60°) wedge angles indicates a greater mismatch between the static and motion cases for the large wedge angle. Finally, to use gel dosimetry as a validation tool, a new technique called the ‘zero-scan method’ is developed for reading the gel dosimeters with x-ray computed tomography (CT). It has been shown that multiple scans of a gel dosimeter (in this case 360 scans) can be used to reconstruct a zero-scan image. This zero-scan image has a similar precision to an image obtained by averaging the CT images, without the additional dose delivered by the CT scans. In this investigation the interplay effects have been studied for single- and multiple-field fractionated EDW treatments using experimental and Monte Carlo methods. For the Monte Carlo methods, the DYNJAWS component module of the BEAMnrc code has been validated and automated, and further used to study the interplay for multiple-field EDW treatments. The zero-scan method, a new gel dosimetry readout technique, has been developed for reading the gel images using x-ray CT without losing precision or accuracy.
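A minimal 1D sketch of the gamma analysis criterion referred to above (3% dose difference, 3 mm distance-to-agreement, global normalisation); the profiles and function names here are illustrative, not the thesis code:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, positions_mm,
                    dose_crit=0.03, dist_crit_mm=3.0):
    """1D global gamma analysis: each reference point passes if some evaluated
    point lies within the combined dose-difference / distance-to-agreement
    ellipse (gamma <= 1)."""
    d_norm = dose_crit * dose_ref.max()   # global criterion: 3% of max dose
    gammas = np.empty(len(dose_ref))
    for i, (x_r, d_r) in enumerate(zip(positions_mm, dose_ref)):
        dd = (dose_eval - d_r) / d_norm           # dose axis
        dx = (positions_mm - x_r) / dist_crit_mm  # distance axis
        gammas[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return float(np.mean(gammas <= 1.0))

# Static wedge profile vs a crudely motion-blurred copy (illustrative only).
x = np.arange(0.0, 100.0, 1.0)
static = np.linspace(0.5, 1.0, x.size)
blurred = np.convolve(static, np.ones(9) / 9, mode="same")
rate = gamma_pass_rate(static, blurred, x)
```

A full clinical implementation would work on 2D film or 3D dose grids with dose thresholds and interpolation, but the pass/fail criterion is the same.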
Abstract:
Residual amplitude modulation (RAM) mechanisms in electro-optic phase modulators are detrimental in applications that require high-purity phase modulation of the incident laser beam. While the origins of RAM are not fully understood, measurements have revealed that it depends on the beam properties of the laser as well as the properties of the medium. Here we present experimental and theoretical results that demonstrate, for the first time, the dependence of RAM production in electro-optic phase modulators on beam intensity. The results show an order-of-magnitude increase in the level of RAM, around 10 dB, with a fifteenfold enhancement in the input intensity from 12 to 190 mW/mm². We show that this intensity-dependent RAM is photorefractive in origin. © 2012 Optical Society of America.
Abstract:
This thesis develops a detailed conceptual design method and a system software architecture, defined with a parametric and generative evolutionary design system, to support an integrated interdisciplinary building design approach. The research recognises the need to shift design efforts toward the earliest phases of the design process to support crucial design decisions that have a substantial cost implication for the overall project budget. The overall motivation of the research is to improve the quality of designs produced at the author's employer, the General Directorate of Major Works (GDMW) of the Saudi Arabian Armed Forces. GDMW produces many buildings that have standard requirements, across a wide range of environmental and social circumstances. A rapid means of customising designs for local circumstances would have significant benefits. The research considers the use of evolutionary genetic algorithms in the design process and the ability to generate and assess a wider range of potential design solutions than a human could manage. This wider-ranging assessment, during the early stages of the design process, means that the generated solutions will be more appropriate for the defined design problem. The research work proposes a design method and system that promote a collaborative relationship between human creativity and computer capability. The tectonic design approach is adopted as a process-oriented design approach that values the process of design as much as the product. The aim is to connect the evolutionary systems to performance assessment applications, which are used as prioritised fitness functions. This will produce design solutions that respond to their environmental and functional requirements. This integrated, interdisciplinary approach to design will produce solutions through a design process that considers and balances the requirements of all aspects of the design.
Since this thesis covers a wide area of research material, a 'methodological pluralism' approach was used, incorporating both prescriptive and descriptive research methods. Multiple models of research were combined and the overall research was undertaken in three main stages: conceptualisation, development and evaluation. The first two stages lay the foundations for the specification of the proposed system, where key aspects of the system that had not previously been proven in the literature were implemented to test the feasibility of the system. As a result of combining the existing knowledge in the area with the newly verified key aspects of the proposed system, this research can form the base for a future software development project. The evaluation stage, which includes building the prototype system to test and evaluate the system performance based on the criteria defined in the earlier stage, is not within the scope of this thesis. The research results in a conceptual design method and a proposed system software architecture. The proposed system is called the 'Hierarchical Evolutionary Algorithmic Design (HEAD) System'. The HEAD system has been shown to be feasible through an initial illustrative paper-based simulation. The HEAD system consists of two main components: the 'Design Schema' and the 'Synthesis Algorithms'. The HEAD system reflects the major research contribution in the way it is conceptualised, while secondary contributions are achieved within the system components. The design schema provides constraints on the generation of designs, thus enabling the designer to create a wide range of potential designs that can then be analysed for desirable characteristics. The design schema supports the digital representation of the human creativity of designers in a dynamic design framework that can be encoded and then executed through the use of evolutionary genetic algorithms.
The design schema incorporates 2D and 3D geometry and graph theory for space layout planning and building formation using the Lowest Common Design Denominator (LCDD) of a parameterised 2D module and a 3D structural module. This provides a bridge between the standard adjacency requirements and the evolutionary system. The use of graphs as an input to the evolutionary algorithm supports the introduction of constraints in a way that is not supported by standard evolutionary techniques. The process of design synthesis is guided as a higher level description of the building that supports geometrical constraints. The Synthesis Algorithms component analyses designs at four levels, 'Room', 'Layout', 'Building' and 'Optimisation'. At each level multiple fitness functions are embedded into the genetic algorithm to target the specific requirements of the relevant decomposed part of the design problem. Decomposing the design problem to allow for the design requirements of each level to be dealt with separately and then reassembling them in a bottom up approach reduces the generation of non-viable solutions through constraining the options available at the next higher level. The iterative approach, in exploring the range of design solutions through modification of the design schema as the understanding of the design problem improves, assists in identifying conflicts in the design requirements. Additionally, the hierarchical set-up allows the embedding of multiple fitness functions into the genetic algorithm, each relevant to a specific level. This supports an integrated multi-level, multi-disciplinary approach. The HEAD system promotes a collaborative relationship between human creativity and the computer capability. The design schema component, as the input to the procedural algorithms, enables the encoding of certain aspects of the designer's subjective creativity. 
By focusing on finding solutions for the relevant sub-problems at the appropriate levels of detail, the hierarchical nature of the system assists in the design decision-making process.
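As a hedged illustration of the kind of evolutionary search with embedded fitness functions described above (a generic sketch under assumed parameters, not the HEAD system's algorithms), a minimal genetic algorithm matching hypothetical target room areas might look like:

```python
import random

def evolve(fitness, n_genes, pop_size=40, generations=60, seed=1):
    """Minimal genetic algorithm: tournament selection, uniform crossover,
    Gaussian mutation; lower fitness is better."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 20.0) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            a = min(rng.sample(pop, 3), key=fitness)  # tournament parent 1
            b = min(rng.sample(pop, 3), key=fitness)  # tournament parent 2
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            nxt.append([g + rng.gauss(0.0, 0.3) for g in child])  # mutate
        pop = nxt
    return min(pop, key=fitness)

# 'Room'-level toy fitness: match three hypothetical target room areas (m^2).
targets = [12.0, 9.0, 15.0]
room_fitness = lambda genes: sum((g - t) ** 2 for g, t in zip(genes, targets))
best = evolve(room_fitness, n_genes=3)
```

In a hierarchical setup such as the one proposed, a separate fitness function of this kind would be embedded at each level ('Room', 'Layout', 'Building', 'Optimisation'), with lower-level winners constraining the search space above them.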
Abstract:
Soluble organic matter derived from exotic Pinus species has been shown to form stronger complexes with iron (Fe) than that derived from most native Australian species. It has also been proposed that the establishment of exotic Pinus plantations in coastal southeast Queensland may have enhanced the solubility of Fe in soils by increasing the amount of organically complexed Fe, but this remains inconclusive. In this study we test whether the concentration and speciation of Fe in soil water from Pinus plantations differ significantly from those in soil water from native vegetation areas. Both Fe redox speciation and the interaction between Fe and dissolved organic matter (DOM) were considered; the Fe-DOM interaction was assessed using the Stockholm Humic Model. Iron concentrations (mainly Fe²⁺) were greatest in the soil waters with the greatest DOM content, collected from sandy podosols (Podzols), where they are largely controlled by redox potential. Iron concentrations were small in soil waters from clay- and iron-oxide-rich soils, in spite of similar redox potentials. This condition is related to stronger sorption on to the reactive clay and iron oxide mineral surfaces in these soils, which reduces the amount of DOM available for electron shuttling and microbial metabolism, restricting reductive dissolution of Fe. Vegetation type had no significant influence on the concentration and speciation of iron in soil waters, although DOM from Pinus sites had greater acidic functional group site densities than DOM from native vegetation sites. This is because Fe is mainly in the ferrous form, even in samples from the relatively well-drained podosols. However, modelling suggests that Pinus DOM can significantly increase the amount of truly dissolved ferric iron remaining in solution in oxic conditions.
Therefore, the input of ferrous iron together with Pinus DOM to surface waters may reduce precipitation of hydrous ferric oxides (ferrihydrite) and increase the flux of dissolved Fe out of the catchment. Such inputs of iron are most probably derived from podosols planted with Pinus.
Abstract:
The volcanic succession on Montserrat provides an opportunity to examine the magmatic evolution of island arc volcanism over a ∼2.5 Ma period, extending from the andesites of the Silver Hills center to the currently active Soufrière Hills volcano (February 2010). Here we present high-precision double-spike Pb isotope data, combined with trace element and Sr-Nd isotope data, throughout this period of Montserrat's volcanic evolution. We demonstrate that each volcanic center (South Soufrière Hills (SSH), Soufrière Hills, Centre Hills and Silver Hills) can be clearly discriminated using trace element and isotopic parameters. Variations in these parameters suggest there have been systematic and episodic changes in the subduction input. The SSH center, in particular, has a greater slab fluid signature, as indicated by low Ce/Pb, but less sediment addition than the other volcanic centers, which have higher Th/Ce. Pb isotope data from Montserrat fall along two trends: the Silver Hills, Centre Hills and Soufrière Hills lie on the general trend of the Lesser Antilles volcanics, whereas the SSH volcanics define a separate trend. The Soufrière Hills and SSH volcanic centers erupted at approximately the same time, but retain distinctive isotopic signatures, suggesting that the SSH magmas have a different source to the other volcanic centers. We hypothesize that this rapid magmatic source change is controlled by the regional transtensional regime, which allowed the SSH magma to be extracted from a shallower source. The Pb isotopes indicate an interplay between subduction-derived components and a MORB-like mantle wedge influenced by a Galapagos plume-like source.
Abstract:
This paper reports on a small-scale study, which looked into the impact of metacognitive instruction on listeners’ comprehension. Twenty-eight adult, Iranian, high-intermediate level EFL listeners participated in a “strategy-based” approach of advance organisation, directed attention, selective attention, and self-management in each of four listening lessons focused on improving listeners’ comprehension of IELTS listening texts. A comparison of pretest and posttest scores showed that the “less-skilled” listeners improved more than “more-skilled” listeners in the IELTS listening tests. Findings also supported the view that metacognitive instruction assisted listeners in considering the process of listening input and promoting listening comprehension ability.
Abstract:
Based on Participatory Action Research (PAR), the case studies in this paper examine the psychosocial benefits and outcomes for clients of community-based Leg Clubs. The Leg Club model was developed in the United Kingdom (UK) to address the issues of social isolation and non-compliance with leg ulcer treatment. Principles underpinning the Leg Club are based on the PAR framework, where the input and involvement of participants are central. This study identifies the strengths of the Leg Club in enabling and empowering people to improve the social context in which they function. In addition, it highlights the potential of expanding operations that are normally clinically based (particularly in relation to chronic conditions) but transferable to community settings, so that they become “agents of change” for addressing such issues as social isolation and the accompanying challenges that these present, including non-compliance with treatment.
Abstract:
The somatosensory system plays an important role in balance control and age-related changes to this system have been implicated in falls. Parkinson’s disease (PD) is a chronic and progressive disease of the brain, characterized by postural instability and gait disturbance. Previous research has shown that deficiencies in somatosensory feedback may contribute to the poorer postural control demonstrated by PD individuals. However, few studies have comprehensively explored differences in somatosensory function and postural control between PD participants and healthy older individuals. The soles of the feet contain many cutaneous mechanoreceptors that provide important somatosensory information sources for postural control. Different types of insole devices have been developed to enhance this somatosensory information and improve postural stability, but these devices are often too complex and expensive to integrate into daily life. Textured insoles provide a more passive intervention that may be an inexpensive and accessible means to enhance the somatosensory input from the plantar surface of the feet. However, to date, there has been little work conducted to test the efficacy of enhanced somatosensory input induced by textured insoles in both healthy and PD populations during standing and walking. Therefore, the aims of this thesis were to determine: 1) whether textured insole surfaces can improve postural stability by enhancing somatosensory information in younger and older adults; 2) the differences between healthy older participants and PD participants for measures of physiological function and postural stability during standing and walking; 3) how changes in somatosensory information affect postural stability in both groups during standing and walking; and 4) whether textured insoles can improve postural stability in both groups during standing and walking.
To address these aims, Study 1 recruited seven older individuals and ten healthy young controls to investigate the effects of two textured insole surfaces on postural stability while performing standing balance tests on a force plate. Participants were tested under three insole surface conditions: 1) barefoot; 2) standing on a hard textured insole surface; and 3) standing on a soft textured insole surface. Measurements derived from the centre of pressure displacement included the range of anterior-posterior and medial-lateral displacement, path length and the 90% confidence elliptical area (C90 area). Results of Study 1 revealed a significant Group*Surface*Insole interaction for the four measures. Both textured insole surfaces reduced postural sway for the older group, especially in the eyes closed condition on the foam surface. However, participants reported that the soft textured insole surface was more comfortable and, hence, the soft textured insoles were adopted for Studies 2 and 3. For Study 2, 20 healthy older adults (controls) and 20 participants with Parkinson’s disease were recruited. Participants were evaluated using a series of physiological assessments that included touch sensitivity, vibratory perception, and pain and temperature threshold detection. Furthermore, nerve function and somatosensory evoked potentials tests were utilized to provide detailed information regarding peripheral nerve function for these participants. Standing balance and walking were assessed on different surfaces using a force plate and the 3D Vicon motion analysis system, respectively. Data derived from the force plate included the range of anterior-posterior and medial-lateral sway, while measures of stride length, stride period, cadence, double support time, stance phase, velocity and stride timing variability were reported for the walking assessment.
The results of this study demonstrated that the PD group had decrements in somatosensory function compared to the healthy older control group. For electrodiagnosis, PD participants had poorer nerve function than controls, as evidenced by slower nerve conduction velocities and longer latencies in the sural nerve and a prolonged latency of the P37 somatosensory evoked potential. Furthermore, the PD group displayed more postural sway in both the anterior-posterior and medial-lateral directions relative to controls, and these differences were increased when standing on a foam surface. With respect to the gait assessment, the PD group took shorter strides and had a reduced stride period compared with the control group. Furthermore, the PD group spent more time in the stance phase and had greater cadence and stride timing variability than the controls. Compared with walking on the firm surface, the two groups demonstrated different gait adaptations while walking on the uneven surface. Controls increased their stride length and stride period and decreased their cadence, which resulted in a consistent walking velocity on both surfaces. Conversely, while the PD patients also increased their stride period and decreased their cadence and stance period on the uneven surface, they did not increase their stride length and, hence, walked slower on the uneven surface. In the PD group, there was a strong positive association between decreased somatosensory function and decreased clinical balance, as assessed by the Tinetti test. Poorer somatosensory function was also strongly positively correlated with the temporospatial gait parameters, especially shorter stride length. Study 3 evaluated the effects of manipulating the somatosensory information from the plantar surface of the feet using textured insoles in the same populations assessed in Study 2.
For this study, participants performed the standing and walking balance tests under three footwear conditions: 1) barefoot; 2) with smooth insoles; and 3) with textured insoles. Standing balance and walking were evaluated using a force plate and a Vicon motion analysis system, and the data were analysed in the same way outlined for Study 2. The findings showed that the smooth and textured insoles had different effects on postural control during both the standing and walking trials. Both insoles decreased medial-lateral sway to the same level on the firm surface. The greatest benefits were observed in the PD group while wearing the textured insoles. When standing under a more challenging condition on the foam surface with eyes closed, only the textured insoles decreased medial-lateral sway in the PD group. With respect to the gait trials, both insoles increased walking velocity, stride length and stride time and decreased cadence, but these changes were more pronounced for the textured insoles. Under challenging conditions, the textured insoles increased walking velocity and stride length and decreased cadence in the PD group. The textured insoles were also effective in reducing the time spent in the double support and stance phases of the gait cycle and did not increase stride timing variability, as was the case for the smooth insoles in the PD group. The results of this study suggest that textured insoles, such as those evaluated in this research, may provide a low-cost means of improving postural stability in high-risk groups, such as people with PD, and may act as an important intervention to prevent falls.
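The standing-balance measures reported above (anterior-posterior and medial-lateral sway range, path length) can be sketched from raw centre-of-pressure data as follows; this is a generic illustration, not the study's analysis code, and it omits the C90 ellipse area:

```python
import numpy as np

def sway_measures(cop_ap_mm, cop_ml_mm):
    """Standing-balance summary measures from force-plate centre-of-pressure
    data: anterior-posterior range, medial-lateral range, and total path
    length of the COP trajectory (all in mm)."""
    return {
        "ap_range": float(np.ptp(cop_ap_mm)),   # peak-to-peak AP excursion
        "ml_range": float(np.ptp(cop_ml_mm)),   # peak-to-peak ML excursion
        "path_length": float(np.sum(np.hypot(np.diff(cop_ap_mm),
                                             np.diff(cop_ml_mm)))),
    }
```

Larger values on any of these measures indicate greater postural sway, which is how group and insole-condition differences are compared.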
Abstract:
The Monte Carlo DICOM Tool-Kit (MCDTK) is a software suite designed for treatment plan dose verification, using the BEAMnrc and DOSXYZnrc Monte Carlo codes. MCDTK converts DICOM-format treatment plan information into Monte Carlo input files and compares the results of Monte Carlo treatment simulations with conventional treatment planning dose calculations. In this study, a treatment is planned using a commercial treatment planning system, delivered to a pelvis phantom containing ten thermoluminescent dosimeters and simulated using BEAMnrc and DOSXYZnrc using inputs derived from MCDTK. The dosimetric accuracy of the Monte Carlo data is then evaluated via comparisons with the dose distribution obtained from the treatment planning system as well as the in-phantom point dose measurements. The simulated beam arrangement produced by MCDTK is found to be in geometric agreement with the planned treatment. An isodose display generated from the Monte Carlo data by MCDTK shows general agreement with the isodose display obtained from the treatment planning system, except for small regions around density heterogeneities in the phantom, where the pencil-beam dose calculation performed by the treatment planning system is likely to be less accurate. All point dose measurements agree with the Monte Carlo data obtained using MCDTK, within confidence limits, and all except one of these point dose measurements show closer agreement with the Monte Carlo data than with the doses calculated by the treatment planning system. This study provides a simple demonstration of the geometric and dosimetric accuracy of Monte Carlo simulations based on information from MCDTK.
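The "within confidence limits" point-dose comparison described above can be sketched generically as follows; the two-standard-uncertainty (k = 2) criterion and the function name are assumptions for illustration, not necessarily what the study used:

```python
def within_confidence(measured, simulated, meas_unc, sim_unc, k=2.0):
    """Agreement check for one point dose: a measured value (e.g. from a
    thermoluminescent dosimeter) and a Monte Carlo simulated value agree if
    their difference is within k combined standard uncertainties
    (k = 2 corresponds to roughly 95% confidence)."""
    combined = (meas_unc ** 2 + sim_unc ** 2) ** 0.5
    return abs(measured - simulated) <= k * combined
```

Applied to each of the ten in-phantom dosimeter positions, a check like this yields the per-point pass/fail agreement reported in the abstract.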
Abstract:
Odometry is an important input to robot navigation systems, and we are interested in the performance of vision-only techniques. In this paper we experimentally evaluate and compare the performance of wheel odometry, monocular feature-based visual odometry, monocular patch-based visual odometry, and a technique that fuses wheel odometry and visual odometry, on a mobile robot operating in a typical indoor environment.
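As a minimal sketch of one way wheel and visual odometry estimates might be fused (a simple fixed-weight blend chosen for illustration; the paper's actual fusion technique is not specified here, and the function names are hypothetical):

```python
def fuse_step(wheel_delta, visual_delta, visual_weight=0.7):
    """Blend one step of wheel and visual odometry (dx, dy, dtheta)
    estimates; the weight encodes relative confidence in the visual
    estimate versus the wheel encoders."""
    w = visual_weight
    return tuple(w * v + (1.0 - w) * u
                 for u, v in zip(wheel_delta, visual_delta))

def integrate(steps):
    """Accumulate planar (dx, dy, dtheta) steps into a final (x, y) position."""
    x = y = 0.0
    for dx, dy, _ in steps:
        x += dx
        y += dy
    return x, y
```

In practice the weighting would typically be handled by a probabilistic filter (e.g. a Kalman filter) with per-sensor covariances rather than a fixed constant.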
Abstract:
As one of the first institutional repositories in Australia and the first in the world to have an institution-wide deposit mandate, QUT ePrints has great ‘brand recognition’ within the University (Queensland University of Technology) and beyond. The repository is managed by the library but, over the years, the Library’s repository team has worked closely with other departments (especially the Office of Research and IT Services) to ensure that QUT ePrints was embedded into the business processes and systems our academics use regularly. For example, the repository is the source of the publication information which displays on each academic’s Staff Profile page. The repository pulls in citation data from Scopus and Web of Science and displays the data in the publications records. Researchers can monitor their citations at a glance via the repository ‘View’ which displays all their publications. A trend in recent years has been to populate institutional repositories with publication details imported from the University’s research information system (RIS). The main advantage of the RIS to Repository workflow is that it requires little input from the academics as the publication details are often imported into the RIS from publisher databases. Sadly, this is also its main disadvantage. Generally, only the metadata is imported from the RIS and the lack of engagement by the academics results in very low proportions of records with open access full-texts. Consequently, while we could see the value of integrating the two systems, we were determined to make the repository the entry point for publication data. In 2011, the University funded a project to convert a number of paper-based processes into web-based workflows. This included a workflow to replace the paper forms academics used to complete to report new publications (which were later used by the data entry staff to input the details into the RIS). 
Publication details and full-text files are uploaded to the repository (by the academics or their nominees). Each night, the repository (QUT ePrints) pushes the metadata for new publications into a holding table. The data is checked by Office of Research staff the next day and then ‘imported’ into the RIS. Publication details (including the repository URLs) are pushed from the RIS to the Staff Profiles system. Previously, academics were required to supply the Office of Research with photocopies of their publications (for verification/auditing purposes). The repository is now the source of verification information. Library staff verify the accuracy of the publication details and, where applicable, the peer review status of the work. The verification metadata is included in the information passed to the Office of Research. The RIS at QUT comprises two separate systems built on an Oracle database: a proprietary product (ResearchMaster) plus a locally produced system known as RAD (Research Activity Database). The repository platform is EPrints, which is built on a MySQL database. This partly explains why the data is passed from one system to the other via a holding table. The new workflow went live in early April 2012. Tests of the technical integration have all been successful. At the end of the first 12 months, the impact of the new workflow on the proportion of full-texts deposited will be evaluated.
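The nightly repository-to-RIS handover via a holding table might be sketched as below; the table columns and function names are hypothetical illustrations, not QUT's actual schema:

```python
import sqlite3

# In-memory stand-in for the holding table sitting between the two systems.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE holding (
    eprint_id INTEGER PRIMARY KEY,
    title     TEXT,
    verified  INTEGER DEFAULT 0,
    imported  INTEGER DEFAULT 0)""")

def nightly_push(conn, records):
    """Repository side: push metadata for new deposits into the holding table."""
    conn.executemany(
        "INSERT OR IGNORE INTO holding (eprint_id, title) VALUES (?, ?)",
        records)

def import_verified(conn):
    """RIS side: import only the records that staff have checked."""
    conn.execute("UPDATE holding SET imported = 1 WHERE verified = 1")
    return conn.execute(
        "SELECT COUNT(*) FROM holding WHERE imported = 1").fetchone()[0]

nightly_push(conn, [(1, "Paper A"), (2, "Paper B")])
conn.execute("UPDATE holding SET verified = 1 WHERE eprint_id = 1")
imported = import_verified(conn)  # only the staff-verified record moves on
```

A staging table like this decouples the MySQL-backed repository from the Oracle-backed RIS: each side reads and writes on its own schedule without direct cross-database queries.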
Abstract:
Limited research is available on how well visual cues integrate with auditory cues to improve speech intelligibility in persons with visual impairments, such as cataracts. We investigated whether simulated cataracts interfered with participants’ ability to use visual cues to help disambiguate a spoken message in the presence of spoken background noise. We tested 21 young adults with normal visual acuity and hearing sensitivity. Speech intelligibility was tested under three conditions: auditory only with no visual input, auditory-visual with normal viewing, and auditory-visual with simulated cataracts. Central Institute for the Deaf (CID) Everyday Speech Sentences were spoken by a live talker, mimicking a pre-recorded audio track, in the presence of pre-recorded four-person background babble at a signal-to-noise ratio (SNR) of -13 dB. The talker was masked to the experimental conditions to control for experimenter bias. Relative to the normal vision condition, speech intelligibility was significantly poorer in the simulated cataract condition, t(20) = 4.17, p < .01, Cohen’s d = 1.0. These results suggest that cataracts can interfere with speech perception, which may occur through a reduction in visual cues, less effective audiovisual integration, or a combination of the two effects. These novel findings contribute to our understanding of the association between two common sensory problems in adults: reduced contrast sensitivity associated with cataracts and reduced face-to-face communication in noise.
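The reported statistics, a paired t-test with 20 degrees of freedom and Cohen’s d, can be computed as below. The intelligibility scores here are synthetic, for illustration only; the study itself reports t(20) = 4.17, p < .01, d = 1.0 for its 21 participants.

```python
import math

# Synthetic paired speech-intelligibility scores (percent correct) for the
# same listeners under two viewing conditions; NOT the study's data.
normal   = [80, 75, 82, 78, 85, 74, 79]  # auditory-visual, normal viewing
cataract = [76, 74, 75, 77, 78, 75, 72]  # auditory-visual, simulated cataracts

# Paired design: analyze the within-subject differences.
diffs = [a - b for a, b in zip(normal, cataract)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))

t_stat = mean_d / (sd_d / math.sqrt(n))  # paired t statistic, df = n - 1
cohens_d = mean_d / sd_d                 # standardized mean difference

print(round(t_stat, 2))   # ≈ 2.89 for this synthetic sample
print(round(cohens_d, 2)) # ≈ 1.09 for this synthetic sample
```

With n = 7 pairs the statistic is compared against a t distribution with 6 degrees of freedom; the study’s df of 20 corresponds to its 21 participants.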
Abstract:
The concept of Six Sigma was initiated in the 1980s by Motorola and has since been implemented in several manufacturing and service organizations. To date, Six Sigma implementation in services has been mostly limited to healthcare and financial services in the private sector; it is now gradually picking up in services such as call centers, education, and construction and related engineering, in both the private and public sectors. Through a literature review, a questionnaire survey, and a multiple case study approach, the paper develops a conceptual framework to facilitate widening the scope of Six Sigma implementation in service organizations. Using grounded theory methodology, this study develops theory for Six Sigma implementation in service organizations. The study involves a questionnaire survey and case studies to understand the issues and build a conceptual framework. The survey was conducted among service organizations in Singapore and was exploratory in nature. The case studies involved three service organizations that had implemented Six Sigma; the objective was to explore and understand the issues highlighted by the survey and the literature. The findings confirm the inclusion of critical success factors, critical-to-quality characteristics, and a set of tools and techniques, as observed in the literature. In the case of key performance indicators, interpretations differ both in the literature and among industry practitioners: some literature explains key performance indicators as performance metrics, whereas other work treats them as key process input or output variables, which mirrors the interpretations of Six Sigma practitioners. The survey responses ‘not relevant’ and ‘unknown to us’ as reasons for not implementing Six Sigma show the need to understand the specific requirements of service organizations. Although much theoretical description of Six Sigma is available, there has been limited rigorous academic research on it.
This gap is even more pronounced for Six Sigma implementation in service organizations, where the theory is not yet mature. Identifying this need, the study contributes by undertaking a theory-building exercise and developing a conceptual framework to understand the issues involved in Six Sigma implementation in service organizations.
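For readers unfamiliar with Six Sigma’s quantitative side, the metrics conventionally used to summarize process performance, defects per million opportunities (DPMO) and the corresponding sigma level, can be sketched as follows. This is standard Six Sigma arithmetic, not material from the paper itself; the call-center figures are invented for illustration.

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value, shift=1.5):
    """Sigma level: inverse-normal of the yield, plus the conventional
    1.5-sigma long-term shift used in Six Sigma practice."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + shift

# Hypothetical service example: 27 mishandled calls out of 10,000,
# counting one defect opportunity per call.
d = dpmo(defects=27, units=10_000, opportunities_per_unit=1)
print(round(d))                   # 2700 DPMO
print(round(sigma_level(d), 2))   # roughly 4.3 sigma
```

A process operating at the eponymous six-sigma level corresponds to about 3.4 DPMO under the same 1.5-sigma-shift convention, which is why DPMO is a common key performance metric in the implementations the survey examines.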