Abstract:
Introduction Many bilinguals will have had the experience of unintentionally reading something in a language other than the intended one (e.g. MUG to mean mosquito in Dutch rather than a receptacle for a hot drink, as one of the possible intended English meanings), of finding themselves blocked on a word for which many alternatives suggest themselves (but, somewhat annoyingly, not in the right language), of their accent changing when stressed or tired and, occasionally, of starting to speak in a language that is not understood by those around them. These instances where lexical access appears compromised and control over language behavior is reduced hint at the intricate structure of the bilingual lexical architecture and the complexity of the processes by which knowledge is accessed and retrieved. While bilinguals might tend to blame word finding and other language problems on their bilinguality, these difficulties per se are not unique to the bilingual population. However, what is unique, and yet far more common than is appreciated by monolinguals, is the cognitive architecture that subserves bilingual language processing. With bilingualism (and multilingualism) the rule rather than the exception (Grosjean, 1982), this architecture may well be the default structure of the language processing system. As such, it is critical that we understand more fully not only how the processing of more than one language is subserved by the brain, but also how this understanding furthers our knowledge of the cognitive architecture that encapsulates the bilingual mental lexicon. The neurolinguistic approach to bilingualism focuses on determining the manner in which the two (or more) languages are stored in the brain and how they are differentially (or similarly) processed. The underlying assumption is that the acquisition of more than one language requires at the very least a change to or expansion of the existing lexicon, if not the formation of language-specific components, and this is likely to manifest in some way at the physiological level. There are many sources of information, ranging from data on bilingual aphasic patients (Paradis, 1977, 1985, 1997) to lateralization (Vaid, 1983; see Hull & Vaid, 2006, for a review), recordings of event-related potentials (ERPs) (e.g. Ardal et al., 1990; Phillips et al., 2006), and positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) studies of neurologically intact bilinguals (see Indefrey, 2006; Vaid & Hull, 2002, for reviews). Following the consideration of methodological issues and interpretative limitations that characterize these approaches, the chapter focuses on how the application of these approaches has furthered our understanding of (1) selectivity of bilingual lexical access, (2) distinctions between word types in the bilingual lexicon and (3) control processes that enable language selection.
Abstract:
Background The accurate measurement of cardiac output (CO) is vital in guiding the treatment of critically ill patients. Invasive or minimally invasive measurement of CO is not without inherent risks to the patient. Skilled Intensive Care Unit (ICU) nursing staff are in an ideal position to assess changes in CO following therapeutic measures. The USCOM (Ultrasonic Cardiac Output Monitor) device is a non-invasive CO monitor whose clinical utility and ease of use require testing. Objectives To compare cardiac output measurement using a non-invasive ultrasonic device (USCOM) operated by a non-echocardiographically trained ICU Registered Nurse (RN), with the conventional pulmonary artery catheter (PAC) using both thermodilution and Fick methods. Design Prospective observational study. Setting and participants Between April 2006 and March 2007, we evaluated 30 spontaneously breathing patients requiring PAC for assessment of heart failure and/or pulmonary hypertension at a tertiary level cardiothoracic hospital. Methods USCOM CO was compared with thermodilution measurements via PAC and CO estimated using a modified Fick equation. The PAC was inserted by a medical officer, and all USCOM measurements were performed by a senior ICU nurse. Mean values, bias and precision, and mean percentage difference between measures were determined to compare methods. The Intra-Class Correlation statistic was also used to assess agreement. The USCOM time to measure was recorded to assess the learning curve for USCOM use performed by an ICU RN, and a line of best fit was derived to describe the operator learning curve. Results In 24 of 30 (80%) patients studied, CO measures were obtained. In 6 of 30 (20%) patients, an adequate USCOM signal was not achieved. The mean differences (±standard deviation) between USCOM and PAC, USCOM and Fick, and Fick and PAC CO were small: −0.34 ± 0.52 L/min, −0.33 ± 0.90 L/min and −0.25 ± 0.63 L/min respectively, across a range of outputs from 2.6 L/min to 7.2 L/min. The percent limits of agreement (LOA) were −34.6% to 17.8% for USCOM and PAC, −49.8% to 34.1% for USCOM and Fick, and −36.4% to 23.7% for PAC and Fick. Signal acquisition time reduced on average by 0.6 min per measure to less than 10 min at the end of the study. Conclusions In 80% of our cohort, USCOM, PAC and Fick measures of CO all showed clinically acceptable agreement, and the learning curve for operation of the non-invasive USCOM device by an ICU RN was found to be satisfactorily short. Further work is required in patients receiving positive pressure ventilation.
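The bias and precision figures above follow the standard Bland–Altman approach: bias is the mean of the paired differences, precision its standard deviation, and the limits of agreement are bias ± 1.96 SD (here also expressed as a percentage of the mean of the two methods). A minimal sketch of that computation, using hypothetical paired CO values rather than the study data:

```python
import numpy as np

def bland_altman(a, b):
    """Agreement between two paired CO methods (L/min)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()                          # mean difference
    sd = diff.std(ddof=1)                       # precision
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # limits of agreement
    # percent differences, relative to the mean of the two methods
    pct = 100 * diff / ((a + b) / 2)
    pct_loa = (pct.mean() - 1.96 * pct.std(ddof=1),
               pct.mean() + 1.96 * pct.std(ddof=1))
    return bias, sd, loa, pct_loa

# hypothetical paired measurements, not the study data
uscom = [4.1, 5.2, 3.0, 6.8, 4.9]
pac   = [4.5, 5.6, 3.2, 7.2, 5.1]
print(bland_altman(uscom, pac))
```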
Abstract:
Sleeper is an 18'00" musical work for live performer and laptop computer which exists as both a live performance work and a recorded work for audio CD. The work has been presented at a range of international performance events and survey exhibitions. These include the 2003 International Computer Music Conference (Singapore) where it was selected for CD publication, Variable Resistance (San Francisco Museum of Modern Art, USA), and i.audio, a survey of experimental sound at the Performance Space, Sydney. The source sound materials are drawn from field recordings made in acoustically resonant spaces in the Australian urban environment, amplified and acoustic instruments, radio signals, and sound synthesis procedures. The processing techniques blur the boundaries between, and exploit, the perceptual ambiguities of de-contextualised and processed sound. The work thus challenges the arbitrary distinctions between sound, noise and music and attempts to reveal the inherent musicality in so-called non-musical materials via digitally re-processed location audio. Thematically the work investigates Paul Virilio’s theory that technology ‘collapses space’ via the relationship of technology to speed. Technically this is explored through the design of a music composition process that draws upon spatially and temporally dispersed sound materials treated using digital audio processing technologies. One of the contributions to knowledge in this work is a demonstration of how disparate materials may be employed within a compositional process to produce music through the establishment of musically meaningful morphological, spectral and pitch relationships. This is achieved through the design of novel digital audio processing networks and a software performance interface. The work explores, tests and extends the music perception theories of ‘reduced listening’ (Schaeffer, 1967) and ‘surrogacy’ (Smalley, 1997), by demonstrating how, through specific audio processing techniques, sounds may shifted away from ‘causal’ listening contexts towards abstract aesthetic listening contexts. In doing so, it demonstrates how various time and frequency domain processing techniques may be used to achieve this shift.
Abstract:
Retinal image properties such as contrast and spatial frequency play important roles in the development of normal vision. For example, visual environments comprised solely of low contrast and/or low spatial frequencies induce myopia. The visual image is processed by the retina, which then locally controls eye growth. In terms of the retinal neurotransmitters that link visual stimuli to eye growth, there is strong evidence to suggest involvement of the retinal dopamine (DA) system. For example, effectively increasing retinal DA levels by using DA agonists can suppress the development of form-deprivation myopia (FDM). However, whether visual feedback controls eye growth by modulating retinal DA release, and/or some other factors, is still being elucidated. This thesis is chiefly concerned with the relationship between the dopaminergic system and retinal image properties in eye growth control. More specifically, it was determined whether the amount of retinal DA release is reduced as the complexity of the retinal image degrades. For example, we investigated whether the level of retinal DA release decreased as image contrast decreased. In addition, the effects of spatial frequency, spatial energy distribution slope, and spatial phase on retinal DA release and eye growth were examined. When chicks were 8 days old, a cone-lens imaging system was applied monocularly (+30 D, 3.3 cm cone). A short-term treatment period (6 hr) and a longer-term treatment period (4.5 days) were used. The short-term treatment tests for an acute reduction in DA release by the visual stimulus, as is seen with diffusers and lenses, whereas the 4.5 day point tests for a reduction in DA release after more prolonged exposure to the visual stimulus. In the contrast study, 1.35 cyc/deg square wave grating targets of 95%, 67%, 45%, 12% or 4.2% contrast were used. Blank (0% contrast) targets were included for comparison. In the spatial frequency study, both sine and square wave grating targets with either 0.017 cyc/deg or 0.13 cyc/deg fundamental spatial frequencies and 95% contrast were used. In the spectral slope study, 30% root-mean-squared (RMS) contrast fractal noise targets with spectral fall-off of 1/f^0.5, 1/f and 1/f^2 were used. In the spatial alignment study, a structured Maltese cross (MX) target, a structured circular patterned (C) target and the scrambled versions of these two targets (SMX and SC) were used. Each treatment group comprised 6 chicks for ocular biometry (refraction and ocular dimension measurement) and 4 for analysis of retinal DA release. Vitreal dihydroxyphenylacetic acid (DOPAC) was analysed through ion-paired reversed phase high performance liquid chromatography with electrochemical detection (HPLC-ED), as a measure of retinal DA release. For the comparison between retinal DA release and eye growth, large reductions in retinal DA release, possibly due to the decreased light level inside the cone-lens imaging system, were observed across all treated eyes, while only those exposed to the low contrast, low spatial frequency sine wave grating, 1/f^2, C and SC targets had myopic shifts in refraction. Amongst these treatment groups, no acute effect was observed, and longer-term effects were only found in the low contrast and 1/f^2 groups. These findings suggest that retinal DA release does not causally link visual stimulus properties to eye growth, and that these target-induced changes in refractive development are not dependent on the level of retinal DA release.
Retinal dopaminergic cells might be affected indirectly via other retinal cells that immediately respond to changes in the contrast of the retinal image.
Abstract:
The molecular and metal profile fingerprints were obtained from a complex substance, Atractylis chinensis DC—a traditional Chinese medicine (TCM)—with the use of high performance liquid chromatography (HPLC) and inductively coupled plasma atomic emission spectroscopy (ICP-AES) techniques. This substance was used in this work as an example of a complex biological material that has found application as a TCM. Such TCM samples are traditionally processed by the Bran, Cut, Fried and Swill methods, and were collected from five provinces in China. The data matrices obtained from the two types of analysis produced two principal component analysis (PCA) biplots, which showed that the HPLC fingerprint data were discriminated on the basis of the methods for processing the raw TCM, while the metal analysis data grouped according to geographical origin. When the two data matrices were combined into one two-way matrix, the resulting biplot showed a clear separation on the basis of the HPLC fingerprints. Importantly, within each grouping the objects separated according to their geographical origin, and they ranked in approximately the same order in each group. This result suggested that by using such an approach, it is possible to derive an improved characterisation of the complex TCM materials on the basis of the two kinds of analytical data. In addition, two supervised pattern recognition methods, the K-nearest neighbours (KNN) method and linear discriminant analysis (LDA), were successfully applied to the individual data matrices—thus supporting the PCA approach.
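As a sketch of the chemometric step described above: the two autoscaled data blocks (HPLC fingerprints and ICP-AES metal profiles) can be concatenated column-wise into one two-way matrix and projected onto its first two principal components for a biplot. File names and matrix shapes below are illustrative, not those of the study:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Illustrative inputs: rows are TCM samples, columns are HPLC peak areas
# or ICP-AES metal concentrations (file names are hypothetical).
X_hplc  = np.loadtxt("hplc_fingerprints.csv", delimiter=",")
X_metal = np.loadtxt("metal_profiles.csv", delimiter=",")

# Autoscale each block so neither technique dominates the variance,
# then combine the blocks into one two-way matrix.
X = np.hstack([StandardScaler().fit_transform(X_hplc),
               StandardScaler().fit_transform(X_metal)])

pca = PCA(n_components=2)
scores = pca.fit_transform(X)     # object (sample) coordinates for the biplot
loadings = pca.components_.T      # variable coordinates for the biplot
print(pca.explained_variance_ratio_)
```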
Abstract:
Interactions between small molecules and biopolymers, e.g. bovine serum albumin (BSA), are important, and significant information is recorded in the UV–vis and fluorescence spectra of their reaction mixtures. The extraction of this information by conventional means is difficult, principally because the spectra of the three analytes in the mixture overlap significantly. The interaction of berberine chloride (BC) and the BSA protein provides an interesting example of such complex systems. UV–vis and fluorescence spectra of BC and BSA mixtures were investigated in pH 7.4 Tris–HCl buffer at 37 °C. Two sample series were measured by each technique: (1) [BSA] was kept constant and [BC] was varied, and (2) [BC] was kept constant and [BSA] was varied. This produced four spectral data matrices, which were combined into one expanded spectral matrix. This matrix was processed by the multivariate curve resolution–alternating least squares (MCR–ALS) method. The results produced: (1) the pure BC, BSA and BC–BSA complex spectra, extracted from the measured, heavily overlapping composite responses, (2) the concentration profiles of BC, BSA and the BC–BSA complex, which are difficult to obtain by conventional means, and (3) estimates of the number of binding sites of BC.
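MCR–ALS alternates two constrained least-squares steps to factor the data matrix D into concentration profiles C and pure spectra S (D ≈ C Sᵀ). A minimal sketch of that core loop, with only a non-negativity constraint and an initial spectral estimate S0 supplied by the caller; the published analysis would have used additional constraints and properly chosen initial estimates:

```python
import numpy as np

def mcr_als(D, S0, n_iter=500, tol=1e-10):
    """Minimal MCR-ALS: factor D (mixtures x wavelengths) as C @ S.T,
    with non-negativity imposed on both C and S by clipping."""
    S = S0.copy()                       # initial spectra (wavelengths x k)
    prev_lof = np.inf
    for _ in range(n_iter):
        C = D @ np.linalg.pinv(S.T)     # least-squares concentration update
        C = np.clip(C, 0.0, None)       # non-negativity on concentrations
        S = (np.linalg.pinv(C) @ D).T   # least-squares spectral update
        S = np.clip(S, 0.0, None)       # non-negativity on spectra
        lof = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)  # lack of fit
        if prev_lof - lof < tol:        # stop once the fit stops improving
            break
        prev_lof = lof
    return C, S, lof
```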
Abstract:
Professional practice guidelines for endoscope reprocessing recommend reprocessing endoscopes between each case and proper storage following reprocessing after the last case of the list. There is limited empirical evidence to support the efficacy of endoscope reprocessing prior to use in the first case of the day; however, internationally, many guidelines continue to recommend this practice. The aim of this study is to estimate a safe shelf life for flexible endoscopes in a high-turnover gastroenterology unit. Materials and methods: In a prospective observational study, all flexible endoscopes in active service during the 3-week study period were microbiologically sampled prior to reprocessing before the first case of the day (n = 200). The main outcome variables were culture status, organism cultured, and shelf life. Results: Among the total number of useable samples (n = 194), the overall contamination rate was 15.5%, with a pathogenic contamination rate of 0.5%. Mean time between the last case on one day and reprocessing before the first case on the next day (that is, shelf life) was 37.62 h (SD 36.47). Median shelf life was 18.8 h (range 5.27–165.35 h). The most frequently identified organism was coagulase-negative Staphylococcus, an environmental nonpathogenic organism. Conclusions: When processed according to established guidelines, flexible endoscopes remain free from pathogenic organisms between last case and next-day first case use. Significant reductions in the expenditure of time and resources on reprocessing endoscopes have the potential to reduce the restraints experienced by high-turnover endoscopy units and improve service delivery.
Abstract:
This is an experimental study into the permeability and compressibility properties of bagasse pulp pads. Three experimental rigs were custom-built for this project. The experimental work is complemented by modelling work. Both the steady-state and dynamic behaviour of pulp pads are evaluated in the experimental and modelling components of this project. Bagasse, the fibrous residue that remains after sugar is extracted from sugarcane, is normally burnt in Australia to generate steam and electricity for the sugar factory. A study into bagasse pulp was motivated by the possibility of making highly value-added pulp products from bagasse for the financial benefit of sugarcane millers and growers. The bagasse pulp and paper industry is a multibillion dollar industry (1). Bagasse pulp could replace eucalypt pulp, which is more widely used in the local production of paper products. An opportunity exists for replacing the large quantity of mainly generic paper products imported to Australia. This includes 949,000 tonnes of generic photocopier papers (2). The use of bagasse pulp for paper manufacture is the main application area of interest for this study. Bagasse contains a large quantity of short parenchyma cells called ‘pith’. Around 30% of the shortest fibres are removed from bagasse prior to pulping. Despite the ‘depithing’ operations in conventional bagasse pulp mills, a large amount of pith remains in the pulp. Amongst Australian paper producers there is a perception that the high quantity of short fibres in bagasse pulp leads to poor filtration behaviour at the wet-end of a paper machine. Bagasse pulp’s poor filtration behaviour reduces paper production rates, and consequently revenue, when compared to paper production using locally made eucalypt pulp. Pulp filtration can be characterised by two interacting factors: permeability and compressibility. Surprisingly, there has previously been very little rigorous investigation into either bagasse pulp permeability or compressibility. Only freeness testing of bagasse pulp has been published in the open literature. As a result, this study has focussed on a detailed investigation of the filtration properties of bagasse pulp pads. As part of this investigation, this study investigated three options for improving the permeability and compressibility properties of Australian bagasse pulp pads. Two options for further pre-treating depithed bagasse prior to pulping were considered. Firstly, bagasse was fractionated based on size, producing ‘coarse’ and ‘medium’ bagasse fractions. Secondly, bagasse was collected after being processed on two types of juice extraction technology, i.e. from a sugar mill and from a sugar diffuser. Finally, one method of post-treating the bagasse pulp was investigated: chemical additives, which are known to improve freeness, were assessed for their effect on pulp pad permeability and compressibility. Pre-treated Australian bagasse pulp samples were compared with several benchmark pulp samples. A sample of commonly used kraft Eucalyptus globulus pulp was obtained. A sample of depithed Argentinean bagasse, which is used for commercial paper production, was also obtained. A sample of Australian bagasse which was depithed as per typical factory operations was also produced for benchmarking purposes. The steady-state pulp pad permeability and compressibility parameters were determined experimentally using two purpose-built experimental rigs.
In reality, steady-state conditions do not exist on a paper machine. The permeability changes as the sheet compresses over time. Hence, a dynamic model was developed which uses the experimentally determined steady-state permeability and compressibility parameters as inputs. The filtration model was developed with a view to designing pulp processing equipment that is suitable specifically for bagasse pulp. The predicted results of the dynamic model were compared to experimental data. The effectiveness of polymeric and microparticle chemical additives for improving the retention of short fibres and increasing the drainage rate of a bagasse pulp slurry was determined in a third purpose-built rig: a modified Dynamic Drainage Jar (DDJ). These chemical additives were then used in the making of a pulp pad, and their effect on the steady-state and dynamic permeability and compressibility of bagasse pulp pads was determined. The most important finding from this investigation was that Australian bagasse pulp was produced with higher permeability than eucalypt pulp, despite a higher overall content of short fibres. It is thought that this research outcome could enable Australian paper producers to switch from eucalypt pulp to bagasse pulp without sacrificing paper machine productivity. It is thought that two factors contributed to the high permeability of the bagasse pulp pad. Firstly, thicker cell walls of the bagasse pulp fibres resulted in high fibre stiffness. Secondly, the bagasse pulp had a large proportion of fibres longer than 1.3 mm. These attributes helped to reinforce the pulp pad matrix. The steady-state permeability and compressibility parameters for the eucalypt pulp were consistent with those found by previous workers. It was also found that Australian pulp derived from the ‘coarse’ bagasse fraction had higher steady-state permeability than the ‘medium’ fraction. However, there was no difference between bagasse pulp originating from a diffuser or a mill. The bagasse pre-treatment options investigated in this study were not found to affect the steady-state compressibility parameters of a pulp pad. The dynamic filtration model was found to give predictions that were in good agreement with experimental data for pads made from samples of pre-treated bagasse pulp, provided at least some pith was removed prior to pulping. Applying vacuum to a pulp slurry in the modified DDJ dramatically reduced the drainage time. At any level of vacuum, bagasse pulp benefitted from chemical additives, as quantified by reduced drainage time and increased retention of short fibres. Using the modified DDJ, it was observed that under specific conditions, a benchmark depithed bagasse pulp drained more rapidly than the ‘coarse’ bagasse pulp. In steady-state permeability and compressibility experiments, the addition of chemical additives improved the pad permeability and compressibility of a benchmark bagasse pulp with a high quantity of short fibres. Importantly, this effect was not observed for the ‘coarse’ bagasse pulp. However, dynamic filtration experiments showed a small observable improvement in filtration for the ‘medium’ bagasse pulp. The mechanism of bagasse pulp pad consolidation appears to be fibre realignment, with chemical additives acting to lubricate the consolidation process. This study was complemented by pulp physical and chemical property testing and a microscopy study.
In addition to its high pulp pad permeability, ‘coarse’ bagasse pulp often (but not always) had physical properties superior to those of a benchmark depithed bagasse pulp.
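For readers unfamiliar with how pad permeability is quantified: under steady-state conditions it is conventionally back-calculated from a constant-rate flow test using Darcy's law. A sketch with purely illustrative numbers (the thesis rigs and measured parameter values are not reproduced here):

```python
def darcy_permeability(Q, mu, L, A, dP):
    """Darcy's law rearranged for permeability, k = Q*mu*L / (A*dP).
    Q: flow rate (m^3/s), mu: viscosity (Pa.s), L: pad thickness (m),
    A: pad cross-sectional area (m^2), dP: pressure drop (Pa)."""
    return Q * mu * L / (A * dP)

# illustrative values only, not measured thesis data
k = darcy_permeability(Q=2e-6, mu=1e-3, L=5e-3, A=1e-2, dP=10e3)
print(f"pad permeability ~ {k:.1e} m^2")   # ~1e-13 m^2
```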
Abstract:
Osteoporosis is a disease characterized by low bone mass and micro-architectural deterioration of bone tissue, with a consequent increase in bone fragility and susceptibility to fracture. Osteoporosis affects over 200 million people worldwide, with an estimated 1.5 million fractures annually in the United States alone, and with attendant costs exceeding $10 billion per annum. Osteoporosis reduces bone density through a series of structural changes to the honeycomb-like trabecular bone structure (micro-structure). The reduced bone density, coupled with the micro-structural changes, results in significant loss of bone strength and increased fracture risk. Vertebral compression fractures are the most common type of osteoporotic fracture and are associated with pain, increased thoracic curvature, reduced mobility, and difficulty with self care. Surgical interventions, such as kyphoplasty or vertebroplasty, are used to treat osteoporotic vertebral fractures by restoring vertebral stability and alleviating pain. These minimally invasive procedures involve injecting bone cement into the fractured vertebrae. The techniques are still relatively new and, while initial results are promising, with the procedures relieving pain in 70-95% of cases, medium-term investigations are now indicating an increased risk of adjacent level fracture following the procedure. With the aging population, understanding and treatment of osteoporosis is an increasingly important public health issue in developed Western countries. The aim of this study was to investigate the biomechanics of spinal osteoporosis and osteoporotic vertebral compression fractures by developing multi-scale computational Finite Element (FE) models of both healthy and osteoporotic vertebral bodies. The multi-scale approach included the overall vertebral body anatomy, as well as a detailed representation of the internal trabecular micro-structure. This novel multi-scale approach overcame limitations of previous investigations by allowing simultaneous investigation of the mechanics of the trabecular micro-structure as well as overall vertebral body mechanics. The models were used to simulate the progression of osteoporosis, the effect of different loading conditions on vertebral strength and stiffness, and the effects of vertebroplasty on vertebral and trabecular mechanics. The model development process began with an individual trabecular strut model using 3D beam elements, which was used as the building block for lattice-type structural trabecular bone models, which were in turn incorporated into the vertebral body models. At each stage of model development, model predictions were compared to analytical solutions and in-vitro data from the existing literature. This incremental process provided confidence in the predictions of each model before incorporation into the overall vertebral body model. The trabecular bone model, vertebral body model and vertebroplasty models were validated against in-vitro data from a series of compression tests performed using human cadaveric vertebral bodies. Firstly, trabecular bone samples were acquired and morphological parameters for each sample were measured using high resolution micro-computed tomography (micro-CT). Apparent mechanical properties for each sample were then determined using uni-axial compression tests. Bone tissue properties were inversely determined using voxel-based FE models based on the micro-CT data.
Specimen-specific trabecular bone models were developed, and the predicted apparent stiffness and strength were compared to the experimentally measured apparent stiffness and strength of the corresponding specimen. Following the trabecular specimen tests, a series of 12 whole cadaveric vertebrae were divided into treated and non-treated groups, and vertebroplasty was performed on the specimens of the treated group. The vertebrae in both groups underwent clinical-CT scanning and destructive uni-axial compression testing. Specimen-specific FE vertebral body models were developed and the predicted mechanical response compared to the experimentally measured responses. The validation process demonstrated that the multi-scale FE models comprising a lattice network of beam elements were able to accurately capture the failure mechanics of trabecular bone, and that a trabecular core represented with beam elements, enclosed in a layer of shell elements representing the cortical shell, was able to adequately represent the failure mechanics of intact vertebral bodies with varying degrees of osteoporosis. Following model development and validation, the models were used to investigate the effects of progressive osteoporosis on vertebral body mechanics and trabecular bone mechanics. These simulations showed that overall failure of the osteoporotic vertebral body is initiated by failure of the trabecular core, and that the failure mechanism of the trabeculae varies with the progression of osteoporosis: from tissue yield in healthy trabecular bone, to failure due to instability (buckling) in osteoporotic bone with its thinner trabecular struts. The mechanical response of the vertebral body under load is highly dependent on the ability of the endplates to deform to transmit the load to the underlying trabecular bone. The ability of the endplate to evenly transfer the load through the core diminishes with osteoporosis. Investigation into the effect of different loading conditions on the vertebral body found that, because the trabecular bone structural changes which occur in osteoporosis result in a structure that is highly aligned with the loading direction, the vertebral body is consequently less able to withstand non-uniform loading states such as occur in forward flexion. Changes in vertebral body loading due to disc degeneration were simulated, but proved to have little effect on osteoporotic vertebra mechanics. Conversely, differences in vertebral body loading between simulated in-vivo conditions (uniform endplate pressure) and in-vitro conditions (where the vertebral endplates are rigidly cemented) had a dramatic effect on the predicted vertebral mechanics. This investigation suggested that in-vitro loading using bone cement potting of both endplates has major limitations in its ability to represent vertebral body mechanics in-vivo. Lastly, an FE investigation into the biomechanical effect of vertebroplasty was performed. The results of this investigation demonstrated that the effect of vertebroplasty on overall vertebra mechanics is strongly governed by the cement distribution achieved within the trabecular core. In agreement with a recent study, the models predicted that vertebroplasty cement distributions which do not form one continuous mass contacting both endplates have little effect on vertebral body stiffness or strength.
In summary, this work presents the development of a novel, multi-scale Finite Element model of the osteoporotic vertebral body, which provides a powerful new tool for investigating the mechanics of osteoporotic vertebral compression fractures at the trabecular bone micro-structural level, and at the vertebral body level.
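The shift from tissue yield to buckling failure in thinned struts follows directly from Euler's formula, in which the critical load scales with the fourth power of strut diameter. A back-of-envelope sketch for an idealised pin-ended cylindrical strut, with the tissue modulus and all dimensions chosen purely for illustration:

```python
import math

def euler_buckling_load(E, d, L):
    """Euler critical load P_cr = pi^2 * E * I / L^2 for a pin-ended
    cylindrical strut; I = pi * d**4 / 64 for a circular cross-section."""
    I = math.pi * d**4 / 64
    return math.pi**2 * E * I / L**2

E = 10e9   # bone tissue modulus, ~10 GPa (illustrative)
L = 1e-3   # strut length, ~1 mm (illustrative)
for d in (200e-6, 120e-6):   # healthy vs osteoporotically thinned strut
    print(f"d = {d*1e6:.0f} um: P_cr = {euler_buckling_load(E, d, L):.2f} N")
```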
Abstract:
This paper presents preliminary results in establishing a strategy for predicting Zenith Tropospheric Delay (ZTD) and relative ZTD (rZTD) between Continuous Operating Reference Stations (CORS) in near real-time. It is anticipated that the predicted ZTD or rZTD can assist network-based Real-Time Kinematic (RTK) performance over long inter-station distances, ultimately enabling a cost-effective method of delivering precise positioning services to sparsely populated regional areas, such as Queensland. This research firstly investigates two ZTD solutions: 1) the post-processed IGS ZTD solution, and 2) the near Real-Time ZTD solution. The near Real-Time solution is obtained through the GNSS processing software package (Bernese) that has been deployed for this project. The predictability of the near Real-Time Bernese solution is analyzed and compared to the post-processed IGS solution, which acts as the benchmark. The predictability analyses were conducted with prediction times of 15, 30, 45, and 60 minutes to determine the error with respect to timeliness. The predictability of ZTD and relative ZTD is characterized by using the previously estimated ZTD as the predicted ZTD of the current epoch. This research has shown that both the ZTD and relative ZTD prediction errors are random in nature; the standard deviation (STD) grows from a few millimeters to the sub-centimeter level as the prediction interval ranges from 15 to 60 minutes. Additionally, the rZTD predictability shows very little dependency on the length of the tested baselines, up to 1000 kilometers. Finally, the comparison of the near Real-Time Bernese solution with the IGS solution has shown a slight degradation in prediction accuracy. The less accurate near Real-Time solution has an STD error of 1 cm within a prediction delay of 50 minutes; however, some larger errors of up to 10 cm are observed.
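The persistence scheme described above (the previously estimated ZTD serves as the prediction for the current epoch) reduces to a simple lagged difference when characterising the error. A minimal sketch, assuming a hypothetical time series file and 5-minute sampling:

```python
import numpy as np

def persistence_error_std(ztd, lead_epochs):
    """STD of the prediction error when the ZTD estimated 'lead_epochs'
    earlier is used as the prediction for the current epoch."""
    ztd = np.asarray(ztd, float)
    err = ztd[lead_epochs:] - ztd[:-lead_epochs]
    return err.std(ddof=1)

ztd = np.loadtxt("cors_ztd_metres.txt")   # hypothetical file, 5-min sampling
for minutes in (15, 30, 45, 60):
    std_mm = persistence_error_std(ztd, minutes // 5) * 1000
    print(f"{minutes} min lead: STD = {std_mm:.1f} mm")
```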
Abstract:
Purpose: Computer vision has been widely used in the inspection of electronic components. This paper proposes a computer vision system for the automatic detection, localisation, and segmentation of solder joints on Printed Circuit Boards (PCBs) under different illumination conditions. Design/methodology/approach: An illumination normalisation approach is applied to an image, which can effectively and efficiently eliminate the effect of uneven illumination while keeping the properties of the processed image the same as in the corresponding image under normal lighting conditions. Consequently, the need for special lighting and instrumental setup in order to detect solder joints can be reduced. These normalised images are insensitive to illumination variations and are used for the subsequent solder joint detection stages. In the segmentation approach, the PCB image is transformed from the RGB colour space to the YIQ colour space for the effective detection of solder joints from the background. Findings: The segmentation results show that the proposed approach improves the performance significantly for images under varying illumination conditions. Research limitations/implications: This paper proposes a front-end system for the automatic detection, localisation, and segmentation of solder joint defects. Further research is required to complete the full system, including the classification of solder joint defects. Practical implications: The methodology presented in this paper can be an effective method to reduce cost and improve quality in the production of PCBs in the manufacturing industry. Originality/value: This research proposes the automatic location, identification and segmentation of solder joints under different illumination conditions.
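The RGB-to-YIQ step mentioned in the approach uses the standard NTSC transform, which separates luminance (Y) from chrominance (I, Q); working in the chrominance channels helps distinguish reflective solder regions from the board background. A minimal sketch of the conversion only (the paper's full pipeline and thresholds are not reproduced):

```python
import numpy as np

# Standard NTSC RGB -> YIQ transform matrix
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_yiq(img):
    """img: H x W x 3 float array with RGB in [0, 1]; returns YIQ channels."""
    return img @ RGB2YIQ.T

# usage: split the converted image into its three channels
# y, i, q = np.moveaxis(rgb_to_yiq(img), -1, 0)
```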
Abstract:
Performance evaluation of object tracking systems is typically performed after the data has been processed, by comparing tracking results to ground truth. Whilst this approach is fine for offline testing, it does not allow for real-time analysis of the system's performance, which may be of use for live systems to either automatically tune the system or report reliability. In this paper, we propose three metrics that can be used to dynamically assess the performance of an object tracking system. Outputs and results from various stages in the tracking system are used to obtain measures that indicate the performance of motion segmentation, object detection and object matching. The proposed dynamic metrics are shown to accurately indicate tracking errors when visually comparing metric results to tracking output, and are shown to display similar trends to the ETISEO metrics when comparing different tracking configurations.
Abstract:
The purpose of this chapter is to describe the use of caricatured contrasting scenarios (Bødker, 2000) and how they can be used to consider potential designs for disruptive technologies. The disruptive technology in this case is Automatic Speech Recognition (ASR) software in workplace settings. The particular workplace is the Magistrates Court of the Australian Capital Territory.

Caricatured contrasting scenarios are ideally suited to exploring how ASR might be implemented in a particular setting because they allow potential implementations to be “sketched” quickly and with little effort. This sketching of potential interactions and the emphasis of both positive and negative outcomes allows the benefits and pitfalls of design decisions to become apparent.

A brief description of the Court is given, describing the reasons for choosing the Court for this case study. The work of the Court is framed as taking place in two modes: front of house, where the courtroom itself is, and backstage, where documents are processed and the business of the court is recorded and encoded into various systems.

Caricatured contrasting scenarios describing the introduction of ASR to the front of house are presented and then analysed. These scenarios show that the introduction of ASR to the court would be highly problematic.

The final section describes how ASR could be re-imagined in order to make it useful for the court. A final scenario is presented that describes how this re-imagined ASR could be integrated into both the front of house and backstage of the court in a way that could strengthen both processes.
Abstract:
The focus of this thesis is discretionary work effort, that is, work effort that is voluntary, is above and beyond what is minimally required or normally expected to avoid reprimand or dismissal, and is organisationally functional. Discretionary work effort is an important construct because it is known to affect individual performance as well as organisational efficiency and effectiveness. To optimise organisational performance and ensure their long-term competitiveness and sustainability, firms need to be able to induce their employees to work at or near their peak level. To work at or near their peak level, individuals must be willing to supply discretionary work effort. Thus, managers need to understand the determinants of discretionary work effort. Nonetheless, despite many years of scholarly investigation across multiple disciplines, considerable debate still exists concerning why some individuals supply only minimal work effort whilst others expend effort well above and beyond what is minimally required of them (i.e. they supply discretionary work effort). Even though it is well recognised that discretionary work effort is important for promoting organisational performance and effectiveness, many authors claim that too little is being done by managers to increase the discretionary work effort of their employees. In this research, I have adopted a multi-disciplinary approach towards investigating the role of monetary and non-monetary work environment characteristics in determining discretionary work effort. My central research questions were "What non-monetary work environment characteristics do employees perceive as perks (perquisites) and irks (irksome work environment characteristics)?" and "How do perks, irks and monetary rewards relate to an employee's level of discretionary work effort?" My research took a unique approach in addressing these research questions. By bringing together the economics and organisational behaviour (OB) literatures, I identified problems with the current definition and conceptualisations of the discretionary work effort construct. I then developed and empirically tested a more concise and theoretically-based definition and conceptualisation of this construct. In doing so, I disaggregated discretionary work effort to include three facets - time, intensity and direction - and empirically assessed whether different classes of work environment characteristics have a differential pattern of relationships with these facets. This analysis involved a new application of a multi-disciplinary framework of human behaviour as a tool for classifying work environment characteristics and the facets of discretionary work effort. To test my model of discretionary work effort, I used a public sector context in which there has been limited systematic empirical research into work motivation. The program of research undertaken involved three separate but interrelated studies using mixed methods. Data on perks, irks, monetary rewards and discretionary work effort were gathered from employees in 12 organisations in the local government sector in Western Australia. Non-monetary work environment characteristics that should be associated with discretionary work effort were initially identified through a review of the literature. Then, a qualitative study explored what work behaviours public sector employees perceive as discretionary and what perks and irks were associated with high and low levels of discretionary work effort.
Next, a quantitative study developed measures of these perks and irks. A Q-sort-type procedure and exploratory factor analysis were used to develop the perks and irks measures. Finally, a second quantitative study tested the relationships amongst perks, irks, monetary rewards and discretionary work effort. Confirmatory factor analysis was first used to confirm the factor structure of the measurement models. Correlation analysis, regression analysis and effect-size correlation analysis were used to test the hypothesised relationships in the proposed model of discretionary work effort. The findings confirmed five hypothesised non-monetary work environment characteristics as common perks and two of three hypothesised non-monetary work environment characteristics as common irks. Importantly, they showed that perks, irks and monetary rewards are differentially related to the different facets of discretionary work effort. The convergent and discriminant validities of the perks and irks constructs, as well as the time, intensity and direction facets of discretionary work effort, were generally confirmed by the research findings. This research advances the literature in several ways: (i) it draws on the economics and OB literatures to redefine and reconceptualise the discretionary work effort construct to provide greater definitional clarity and a more complete conceptualisation of this important construct; (ii) it builds on prior research to create a more comprehensive set of perks and irks for which measures are developed; (iii) it develops and empirically tests a new motivational model of discretionary work effort that enhances our understanding of the nature and functioning of perks and irks and advances our ability to predict discretionary work effort; and (iv) it fills a substantial gap in the literature on public sector work motivation by revealing what work behaviours public sector employees perceive as discretionary and what work environment characteristics are associated with their supply of discretionary work effort. Importantly, by disaggregating discretionary work effort, this research provides greater detail on how perks, irks and monetary rewards are related to the different facets of discretionary work effort. Thus, from a theoretical perspective, this research also demonstrates the conceptual meaningfulness and empirical utility of investigating the different facets of discretionary work effort separately. From a practical perspective, identifying work environment factors that are associated with discretionary work effort enhances managers' capacity to tap this valuable resource. This research indicates that to maximise the potential of their human resources, managers need to address perks, irks and monetary rewards. It suggests three different mechanisms through which managers might influence discretionary work effort and points to the importance of training for both managers and non-managers in cultivating positive interpersonal relationships.
Abstract:
In rapidly changing environments, organisations require dynamic capabilities to integrate, build and reconfigure resources and competencies to achieve continuous innovation. Although tangible resources are important to promoting the firm’s ability to act, capabilities fundamentally rest in the knowledge created and accumulated by the firm through human capital, organisational routines, processes, practices and norms. The exploration for new ideas, technologies and knowledge on the one hand, and the exploitation of existing and new knowledge on the other, are essential for continuous innovation. Firms need to decide how best to allocate their scarce resources between both activities and, at the same time, build dynamic capabilities to keep up with changing market conditions. This, in turn, is influenced by the absorptive capacity of the firm to assimilate knowledge. This paper presents a case study that investigates the sources of knowledge in an engineering firm in Australia, and how that knowledge is organised and processed. As information pervades the firm from both internal and external sources, individuals integrate knowledge using both exploration and exploitation approaches. The findings illustrate that absorptive capacity can encourage greater leverage of exploration potential, leading to radical innovation, and the reconfiguring of exploitable knowledge for incremental improvements. This study provides insight for managers in quest of improving knowledge strategies and continuous innovation. It also makes significant theoretical contributions to the literature through extending the concepts of