919 results for High-Order Accuracy
Abstract:
This paper examines the accuracy of using the built-in camera of smartphones and free software as an economical way to quantify and analyse light exposure by producing luminance maps from High Dynamic Range (HDR) images. HDR images were captured with an Apple iPhone 4S to record a wide range of luminances within an indoor and an outdoor scene. The HDR images were then processed using Photosphere software (Ward, 2010) to produce luminance maps, in which individual pixel values were compared with calibrated luminance meter readings. This comparison showed an average luminance error of ~8% between the HDR image pixel values and the luminance meter readings when the range of luminances in the image is limited to approximately 1,500 cd/m².
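A minimal sketch (with hypothetical arrays standing in for the HDR-derived pixel luminances and the calibrated meter readings described above) of how such an average relative luminance error can be computed:

```python
import numpy as np

# Hypothetical paired measurements (cd/m^2): HDR-derived pixel luminance vs. meter reading.
hdr_luminance   = np.array([120.0, 450.0, 870.0, 1320.0, 1490.0])
meter_luminance = np.array([131.0, 420.0, 910.0, 1250.0, 1395.0])

# Average relative error between the HDR pixel values and the meter readings.
relative_error = np.abs(hdr_luminance - meter_luminance) / meter_luminance
print(f"average luminance error: {100.0 * relative_error.mean():.1f}%")
```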
Abstract:
Emerging sciences, such as conceptual cost estimating, seem to have to go through two phases. The first phase involves reducing the field of study down to its basic ingredients - from systems development to technological development (techniques) to theoretical development. The second phase operates in the opposite direction, building up techniques from theories and systems from techniques. Cost estimating is clearly and distinctly still in the first phase. A great deal of effort has been put into the development of both manual and computer-based cost estimating systems during this first phase and, to a lesser extent, the development of a range of techniques that can be used (see, for instance, Ashworth & Skitmore, 1986). Theoretical developments have not, as yet, been forthcoming. All theories need the support of some observational data and cost estimating is not likely to be an exception. These data do not need to be complete in order to build theories. Just as it is possible to construct an image of a prehistoric animal such as the brontosaurus from only a few key bones and relics, so a theory of cost estimating may possibly be founded on a few factual details. The eternal argument of empiricists and deductionists is that, as theories need factual support, so we need theories in order to know what facts to collect. In cost estimating, the basic facts of interest concern accuracy, the cost of achieving this accuracy, and the trade-off between the two. When cost estimating theories do begin to emerge, it is highly likely that these relationships will be central features. This paper presents some of the facts we have been able to acquire regarding one part of this relationship - accuracy, and its influencing factors. Although some of these factors, such as the amount of information used in preparing the estimate, will have cost consequences, we have not yet reached the stage of quantifying these costs. Indeed, as will be seen, many of the factors do not involve any substantial cost considerations. The absence of any theory is reflected in the arbitrary manner in which the factors are presented. Rather, the emphasis here is on the consideration of purely empirical data concerning estimating accuracy. The essence of good empirical research is to minimize the role of the researcher in interpreting the results of the study. Whilst space does not allow a full treatment of the material in this manner, the principle has been adopted as closely as possible to present results in an uncleaned and unbiased way. In most cases the evidence speaks for itself. The first part of the paper reviews most of the empirical evidence that we have located to date. Knowledge of any work done but omitted here would be most welcome. The second part of the paper presents an analysis of some recently acquired data pertaining to this growing subject.
Abstract:
Fractional reaction–subdiffusion equations have been widely used in recent years to model physical phenomena. In this paper, we consider a variable-order nonlinear reaction–subdiffusion equation. A numerical approximation method is proposed to solve the equation, and its convergence and stability are analyzed by Fourier analysis. By means of a technique for improving temporal accuracy, we also propose an improved numerical approximation. Finally, the effectiveness of the theoretical results is demonstrated by numerical examples.
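As a simplified illustration (using a fixed fractional order rather than the variable order considered in the paper, and not the paper's specific scheme), the Grünwald–Letnikov weights that underlie many subdiffusion discretisations can be generated by a short recurrence; the sketch below checks the resulting approximation against the known fractional derivative of f(t) = t:

```python
import numpy as np
from math import gamma

def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_k = (-1)^k * C(alpha, k) via the standard recurrence."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = (1.0 - (alpha + 1.0) / k) * w[k - 1]
    return w

def gl_fractional_derivative(f_values, alpha, h):
    """Approximate D^alpha f at the last grid point from samples on a uniform grid."""
    n = len(f_values) - 1
    w = gl_weights(alpha, n)
    # D^alpha f(t_n) ~ h^(-alpha) * sum_k w_k * f(t_n - k*h)
    return h ** (-alpha) * np.dot(w, f_values[::-1])

alpha, h, T = 0.4, 1e-3, 1.0
t = np.linspace(0.0, T, int(T / h) + 1)
approx = gl_fractional_derivative(t, alpha, h)       # f(t) = t sampled on the grid
exact = T ** (1.0 - alpha) / gamma(2.0 - alpha)      # D^alpha t = t^(1-alpha) / Gamma(2-alpha)
print(approx, exact)
```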
Abstract:
This paper presents an experimental study on the effect of presoaked lightweight aggregates (LWAs) for internal curing on the water permeability, water absorption and resistance to chloride-ion penetration of concrete, in comparison with a control concrete and a concrete with shrinkage-reducing admixture (SRA) of similar water/cement ratio (w/c). In general, the concretes with LWA particles had initial water absorption, sorptivity and water permeability similar to or lower than those of the control concrete and the concrete with SRA. The charges passed, chloride migration coefficient and chloride diffusion coefficient of such concretes were of the same order as those of the control concrete and the concrete with SRA. However, the incorporation of the LWAs for internal curing reduced the unit weight, compressive strength and elastic modulus of the concrete. Comparing the LWAs of different sizes for internal curing, finer particles were more efficient in reducing the shrinkage and generally resulted in less reduction in the unit weight, compressive strength, and elastic modulus. However, increasing the content of the more porous crushed LWA particles in concrete appeared to increase the penetration of chloride ions into the concrete. The concrete with SRA had initial water absorption, sorptivity, water permeability and resistance to chloride ion penetration comparable with those of the control concrete. The use of SRA did not affect the elastic modulus of the concrete and had only a minor influence on its compressive strength.
Abstract:
The objective of this PhD research program is to investigate numerical methods for simulating variably-saturated flow and sea water intrusion in coastal aquifers in a high-performance computing environment. The work is divided into three overlapping tasks: to develop an accurate and stable finite volume discretisation and numerical solution strategy for the variably-saturated flow and salt transport equations; to implement the chosen approach in a high-performance computing environment that may have multiple GPUs or CPU cores; and to verify and test the implementation. The geological description of aquifers is often complex, with porous materials possessing highly variable properties that are best described using unstructured meshes. The finite volume method is a popular method for the solution of the conservation laws that describe sea water intrusion, and is well suited to unstructured meshes. In this work we apply a control volume-finite element (CV-FE) method to an extension of a recently proposed formulation (Kees and Miller, 2002) for variably saturated groundwater flow. The CV-FE method evaluates fluxes at points where material properties and gradients in pressure and concentration are consistently defined, making it both suitable for heterogeneous media and mass conservative. Using the method of lines, the CV-FE discretisation gives a set of differential algebraic equations (DAEs) amenable to solution using higher-order implicit solvers. Heterogeneous computer systems, which use a combination of computational hardware such as CPUs and GPUs, are attractive for scientific computing due to the potential advantages offered by GPUs for accelerating data-parallel operations. We present a C++ library that implements data-parallel methods on both CPUs and GPUs. The finite volume discretisation is expressed in terms of these data-parallel operations, which gives an efficient implementation of the nonlinear residual function. This makes the implicit solution of the DAE system possible on the GPU, because the inexact Newton-Krylov method used by the implicit time stepping scheme can approximate the action of a matrix on a vector using residual evaluations. We also propose preconditioning strategies that are amenable to GPU implementation, so that all computationally intensive aspects of the implicit time stepping scheme are implemented on the GPU. Results are presented that demonstrate the efficiency and accuracy of the proposed numerical methods and formulation. The formulation offers excellent conservation of mass, and higher-order temporal integration increases both the numerical efficiency and the accuracy of the solutions. Flux limiting produces accurate, oscillation-free solutions on coarse meshes, whereas much finer meshes are required to obtain solutions of equivalent accuracy using upstream weighting. The computational efficiency of the software is investigated using CPUs and GPUs on a high-performance workstation. The GPU version offers considerable speedup over the CPU version, with one GPU giving a speedup factor of 3 over the eight-core CPU implementation.
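One building block mentioned above, the Jacobian-free Newton–Krylov idea, is easy to illustrate: the action of the Jacobian on a vector is approximated from residual evaluations alone, so no matrix ever needs to be assembled. A minimal sketch (with a hypothetical residual function, not the thesis's discretisation):

```python
import numpy as np

def jacobian_vector_product(F, u, v, eps=None):
    """Approximate J(u) @ v with a forward finite difference of the residual F.

    Krylov methods only need matrix-vector products, so this is enough to run
    an inexact Newton-Krylov solve without assembling the Jacobian.
    """
    if eps is None:
        # Common heuristic scaling for the perturbation size.
        eps = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u)) / max(np.linalg.norm(v), 1e-30)
    return (F(u + eps * v) - F(u)) / eps

# Hypothetical nonlinear residual, state and direction, for illustration only.
F = lambda u: np.array([u[0] ** 2 + u[1] - 3.0, u[0] + u[1] ** 3 - 5.0])
u = np.array([1.0, 1.5])
v = np.array([0.2, -0.1])
print(jacobian_vector_product(F, u, v))
```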
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, and non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed with deterministic (non-changing) inputs such as files and passwords. For such data types a one-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security aspects required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, which is an essential requirement for non-invertibility. The method is also designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded, and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
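As an illustration of the quantization and encoding stage discussed above (a generic threshold-based binarization, not the HOS/Radon scheme proposed in the dissertation), the sketch below learns one threshold per feature dimension from training data, binarizes real-valued features, and compares hashes by Hamming distance:

```python
import numpy as np

def train_thresholds(training_features):
    """Learn one quantization threshold per feature dimension (here, the median).

    The choice of training data and threshold statistic is exactly the kind of
    design decision that affects both hashing accuracy and hashing security.
    """
    return np.median(training_features, axis=0)

def binarize(features, thresholds):
    """Quantize real-valued features to a binary hash by thresholding."""
    return (features > thresholds).astype(np.uint8)

def hamming_distance(h1, h2):
    return int(np.count_nonzero(h1 != h2))

# Hypothetical example: 1000 training feature vectors of length 64.
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 64))
thresholds = train_thresholds(train)

query = rng.normal(size=64)
near_duplicate = query + rng.normal(scale=0.05, size=64)  # small, meaning-preserving change
unrelated = rng.normal(size=64)

h_query = binarize(query, thresholds)
print(hamming_distance(h_query, binarize(near_duplicate, thresholds)))  # expected: small
print(hamming_distance(h_query, binarize(unrelated, thresholds)))       # expected: large
```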
Abstract:
Introduction: The motivation for developing megavoltage (and kilovoltage) cone beam CT (MV CBCT) capabilities in the radiotherapy treatment room was primarily based on the need to improve patient set-up accuracy. There has recently been an interest in using the cone beam CT data for treatment planning. Accurate treatment planning, however, requires knowledge of the electron density of the tissues receiving radiation in order to calculate dose distributions. This is obtained from CT, utilising a conversion between CT number and electron density of various tissues. The use of MV CBCT has particular advantages compared to treatment planning with kilovoltage CT in the presence of high atomic number materials, and requires the conversion of pixel values from the image sets to electron density. Therefore, a study was undertaken to characterise the pixel value to electron density relationship for the Siemens MV CBCT system, MVision, and determine the effect, if any, of varying the number of monitor units used for acquisition. If a significant difference with the number of monitor units were seen, then pixel value to ED conversions might be required for each of the clinical settings. The calibration of the MV CT images for electron density offers the possibility of a daily recalculation of the dose distribution and the introduction of new adaptive radiotherapy treatment strategies. Methods: A Gammex Electron Density CT Phantom was imaged with the MV CBCT system. The pixel value for each of the sixteen inserts, which ranged from 0.292 to 1.707 in electron density relative to the background solid water, was determined by taking the mean value from within a region of interest centred on the insert, over 5 slices within the centre of the phantom. These results were averaged and plotted against the relative electron densities of each insert, and a linear least squares fit was performed. This procedure was performed for images acquired with 5, 8, 15 and 60 monitor units. Results: The linear relationship between MV CBCT pixel value and ED was demonstrated for all monitor unit settings and over a range of electron densities. The number of monitor units utilised was found to have no significant impact on this relationship. Discussion: It was found that the number of MU utilised does not significantly alter the pixel value obtained for different ED materials. However, to ensure the most accurate and reproducible MV to ED calibration, one MU setting should be chosen and used routinely. To ensure accuracy for the clinical situation this MU setting should correspond to that which is used clinically. If more than one MU setting is used clinically, then an average of the CT values acquired with different numbers of MU could be utilised without loss in accuracy. Conclusions: No significant differences have been shown in the pixel value to ED conversion for the Siemens MV CT cone beam unit with change in monitor units. Thus a single conversion curve could be utilised for MV CT treatment planning. To fully utilise MV CT imaging for radiotherapy treatment planning, further work will be undertaken to ensure all corrections have been made and dose calculations verified. These dose calculations may be either for treatment planning purposes or for reconstructing the delivered dose distribution from transit dosimetry measurements made using electronic portal imaging devices.
This will potentially allow the cumulative dose distribution to be determined over the patient's multi-fraction treatment, and adaptive treatment strategies to be developed to optimise the tumour response.
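A minimal sketch of the pixel-value-to-electron-density calibration described in the Methods (the numbers below are made up purely for illustration; the real values are the ROI means from the sixteen phantom inserts):

```python
import numpy as np

# Hypothetical (pixel value, relative electron density) calibration pairs.
pixel_values = np.array([210.0, 480.0, 705.0, 930.0, 1150.0, 1480.0])
relative_ed  = np.array([0.292, 0.650, 0.950, 1.200, 1.450, 1.707])

# Linear least squares fit: relative_ED = slope * pixel_value + intercept.
slope, intercept = np.polyfit(pixel_values, relative_ed, deg=1)

def pixel_to_ed(pixel_value):
    """Convert an MV CBCT pixel value to relative electron density via the calibration."""
    return slope * pixel_value + intercept

print(pixel_to_ed(800.0))
```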
Abstract:
The QUT Outdoor Worker Sun Protection (OWSP) project undertook a comprehensive applied health promotion project to demonstrate the effectiveness of sun protection measures which influence high-risk outdoor workers in Queensland to adopt sun safe behaviours. The three-year project (2010-2013) was driven by two key concepts: 1) the hierarchy of control, which is used to address risks in the workplace and advocates for six control measures that need to be considered in order of priority (refer to Section 3.4.2); and 2) the Ottawa Charter, which recommends five action areas to achieve health promotion (refer to Section 2.1). The project framework was underpinned by a participatory action research approach that valued people's input, took advantage of existing skills and resources, and stimulated innovation (refer to Section 4.2). Fourteen workplaces (small and large) with a majority outdoor workforce were recruited across regional Queensland (Darling Downs, Northwest, Mackay and Cairns) from four industry types: 1) building and construction, 2) rural and farming, 3) local government, and 4) public sector. A workplace champion was identified at each workplace and was supported (through resource provision, regular contact and site visits) over a 14 to 18 month intervention period to make sun safety a priority in their workplace. Employees and employers were independently assessed for pre- and post-intervention sun protection behaviours. As part of the intervention, an individualised sun safety action plan was developed in conjunction with each workplace to guide changes across six key strategy areas including: 1) Policy (e.g., adopt sun safety practices during all company events); 2) Structural and environmental (e.g., shade on worksites; eliminate or minimise reflective surfaces); 3) Personal protective equipment (PPE) (e.g., trial different types of sunscreens, or wide-brimmed hats); 4) Education and awareness (e.g., include sun safety in inductions and toolbox talks; send reminder emails or text messages to workers); 5) Role modelling (e.g., by managers, supervisors, workplace champions and mentors); and 6) Skin examinations (e.g., allow time off work for skin checks). The participatory action process revealed that there was no "one size fits all" approach to sun safety in the workplace; a comprehensive, tailored approach was fundamental. This included providing workplaces with information, resources, skills, know-how, incentives and practical help. For example, workplaces engaged in farming complete different seasonal tasks across the year and needed to prepare for optimal sun safety of their workers during less labour-intensive times. In some construction workplaces, long pants were considered a trip hazard and could not be used as part of a PPE strategy. Culture change was difficult to achieve and workplace champions needed guidance on the steps to facilitate this (e.g., influencing leaders through peer support, mentoring and role modelling). With the assistance of the project team, the majority of workplaces were able to successfully implement the sun safety strategies contained within their action plans, upskilling them in the evidence for sun safety, how to overcome barriers, how to negotiate with all relevant parties, and how to assess success.
The most important enablers of the implementation of a successful action plan were a pro-active workplace champion, strong employee engagement, supportive management, the use of highly visual educational resources, and external support (provided by the project team through regular contact, either directly through phone calls or indirectly through emails and e-newsletters). Identified barriers included a lack of time, the multiple roles of workplace champions (especially among smaller workplaces), competing issues leading to a lack of priority for sun safety, the culture of outdoor workers, and costs or budgeting constraints. The level of sun safety awareness, knowledge, and sun protective behaviours reported by the workers increased between pre- and post-intervention. Of the nine sun protective behaviours that were assessed, the largest changes reported included a 26% increase in workers who "usually or always" wore a broad-brimmed hat, a 20% increase in the use of natural shade, a 19% increase in workers wearing long-sleeved collared shirts, and a 16% increase in workers wearing long trousers.
Abstract:
Children with Autism Spectrum Disorder experience difficulty in communication and in understanding the social world, which can have negative consequences for their relationships, for managing emotions, and for dealing with the challenges of everyday life generally. This thesis examines the effectiveness of the Active and Reflective components of the Get REAL program through the assessment of detailed coding of video-recorded observations and longitudinal quantitative analysis. The aim of Get REAL is to increase the social, emotional, and cognitive learning of children with High Functioning Autism (HFA). Get REAL is a group program designed specifically for use in inclusive primary school settings. The Get REAL program was designed in response to the mixed success of existing social skills programs in generalising learning to new contexts. The theoretical foundation of Get REAL is based upon pedagogical theory and learning theory to facilitate transfer of learning, combined with experiential, individualised, evaluative and organisational approaches. This thesis is by publication and consists of four refereed journal papers: one accepted for publication and three under review. Paper 1 describes the development and theoretical basis of the Get REAL program and provides detail of the program structure and learning cycle. The focus of Paper 1 reflects the first question of interest in the thesis, which is the extent to which learning derived from participation in the program can be generalised to other contexts. Participants are 16 children with HFA ranging in age from 8 to 13 years. Results provided support for the generalisability of learning from Get REAL to home and school, evidenced by parent and teacher data collected pre and post participation in Get REAL. Following establishment of the generalisation of learning from Get REAL, Papers 2 and 3 focus on the Active and Reflective components of the program in order to examine how individual and group learning takes place. Participants (N = 12) in the program were video-recorded during the Active and Reflective sessions. Using identical coding protocols for the video data, improvements in prosocial behaviour and diminishing of inappropriate behaviours were apparent, with the exception of perspective taking. Data also revealed that 2 of the participants had atypical trajectories. An in-depth case study analysis was then conducted with these 2 participants in Paper 4. Data included reports from health care and education professionals within the school and externally (e.g., paediatrician) and identified the multi-faceted nature of care needed for children with comorbid diagnoses and extremely challenging family circumstances, and the complexity of effecting change in such cases. Results of this research support the effectiveness of the Get REAL program in promoting prosocial behaviours, such as improvements in engaging with others and emotional regulation, and in diminishing unwanted behaviours such as conduct problems. Further, the gains made by the participating children were found to be generalisable beyond Get REAL to home and other school settings. The research contained in the thesis adds to current knowledge about how learning can take place for children with HFA. Results show that an experiential learning framework with a focus on social cognition, together with explicit teaching, scaffolded with video feedback, are key ingredients for the generalisation of social learning to broader contexts.
Abstract:
Non-periodic structural variation has been found in the high Tc cuprates, YBa2Cu3O7-x and Hg0.67Pb0.33Ba2Ca2Cu3O8+δ, by image analysis of high resolution transmission electron microscope (HRTEM) images. We use two methods for analysis of the HRTEM images. The first method is a means for measuring the bending of lattice fringes at twin planes. The second method is a low-pass filter technique which enhances information contained by diffuse-scattered electrons and reveals what appears to be an interference effect between domains of differing lattice parameter in the top and bottom of the thin foil. We believe that these methods of image analysis could be usefully applied to the many thousands of HRTEM images that have been collected by other workers in the high temperature superconductor field. This work provides direct structural evidence for phase separation in high Tc cuprates, and gives support to recent stripes models that have been proposed to explain various angle resolved photoelectron spectroscopy and nuclear magnetic resonance data. We believe that the structural variation is a response to an opening of an electronic solubility gap where holes are not uniformly distributed in the material but are confined to metallic stripes. Optimum doping may occur as a consequence of the diffuse boundaries between stripes which arise from spinodal decomposition. Theoretical ideas about the high Tc cuprates which treat the cuprates as homogeneous may need to be modified in order to take account of this type of structural variation.
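The low-pass filtering step can be illustrated with a simple Fourier-domain Gaussian filter (a generic sketch, not necessarily the exact filter used in the paper): low spatial frequencies are retained, suppressing the strong lattice fringes and keeping the slowly varying contrast carried by diffusely scattered electrons.

```python
import numpy as np

def gaussian_low_pass(image, cutoff_fraction=0.1):
    """Apply a Gaussian low-pass filter to a 2D image in the Fourier domain.

    cutoff_fraction is the filter width in cycles per sample; frequencies well
    above it (e.g. sharp lattice fringes) are strongly attenuated.
    """
    ny, nx = image.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    mask = np.exp(-(fx ** 2 + fy ** 2) / (2.0 * cutoff_fraction ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))

# Hypothetical synthetic "HRTEM-like" image: fine lattice fringes plus a slow background.
y, x = np.mgrid[0:256, 0:256]
image = np.sin(2 * np.pi * x / 4.0) + 0.2 * np.sin(2 * np.pi * x / 128.0)
filtered = gaussian_low_pass(image)
print(filtered.std())  # dominated by the slowly varying component after filtering
```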
Abstract:
Individual variability in the acquisition, consolidation and extinction of conditioned fear potentially contributes to the development of fear pathology, including posttraumatic stress disorder (PTSD). Pavlovian fear conditioning is a key tool for the study of fundamental aspects of fear learning. Here, we used selected mouse lines of High and Low Pavlovian conditioned fear created from an advanced intercross line (AIL) in order to begin to identify the cellular basis of phenotypic divergence in Pavlovian fear conditioning. We investigated whether phosphorylated MAPK (p44/42 ERK/MAPK), a protein kinase required in the amygdala for the acquisition and consolidation of Pavlovian fear memory, is differentially expressed following Pavlovian fear learning in the High and Low fear lines. We found that following Pavlovian auditory fear conditioning, High and Low line mice differ in the number of pMAPK-expressing neurons in the dorsal subnucleus of the lateral amygdala (LAd). In contrast, this difference was not detected in the ventral medial (LAvm) or ventral lateral (LAvl) amygdala subnuclei or in control animals. We propose that this apparent increase in plasticity at a known locus of fear memory acquisition and consolidation relates to intrinsic differences between the two fear phenotypes. These data provide important insights into the micronetwork mechanisms encoding phenotypic differences in fear. Understanding the circuit-level cellular and molecular mechanisms that underlie individual variability in fear learning is critical for the development of effective treatment of fear-related illnesses such as PTSD.
Abstract:
Aim: To explore weight status perception and its relation to actual weight status in a contemporary cohort of 5- to 17-year-old children and adolescents. Methods: Body mass index (BMI), derived from height and weight measurements, and perception of weight status (‘too thin’, ‘about right’ and ‘too fat’) were evaluated in 3043 participants from the Healthy Kids Queensland Survey. In children less than 12 years of age, weight status perception was obtained from the parents, whereas the adolescents self-reported their perceived weight status. Results: Compared with measured weight status by established BMI cut-offs, just over 20% of parents underestimated their child's weight status and only 1% overestimated. Adolescent boys were more likely to underestimate their weight status compared with girls (26.4% vs. 10.2%, P < 0.05) whereas adolescent girls were more likely to overestimate than underestimate (11.8% vs. 3.4%, P < 0.05). Underestimation was greater by parents of overweight children compared with those of obese children, but still less than 50% of parents identified their obese child as ‘too fat’. There was greater recognition of overweight status in the adolescents, with 83% of those who were obese reporting they were ‘too fat’. Conclusion: Whilst there was a high degree of accuracy of weight status perception in those of healthy weight, there was considerable underestimation of weight status, particularly by parents of children who were overweight or obese. Strategies are required that enable parents to identify what a healthy weight looks like and help them understand when intervention is needed to prevent further weight gain as the child gets older.
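A minimal sketch (hypothetical data and deliberately simplified, non-age-specific BMI cut-offs; the survey used established age- and sex-specific cut-offs) of how measured and perceived weight status can be cross-tabulated to quantify underestimation:

```python
def bmi(weight_kg, height_m):
    """Body mass index from measured weight and height."""
    return weight_kg / height_m ** 2

def measured_category(bmi_value, overweight_cutoff=25.0):
    """Simplified classification: overweight/obese map to 'too fat'.

    Real child/adolescent classification uses age- and sex-specific cut-offs.
    """
    return "too fat" if bmi_value >= overweight_cutoff else "about right"

# Hypothetical (weight kg, height m, perceived category) records.
records = [(62.0, 1.55, "about right"), (48.0, 1.60, "about right"), (85.0, 1.65, "too fat")]

underestimated = sum(
    1 for w, h, perceived in records
    if measured_category(bmi(w, h)) == "too fat" and perceived != "too fat"
)
print(f"underestimation rate: {underestimated / len(records):.0%}")
```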
Abstract:
In many active noise control (ANC) applications, an online secondary path modelling method that uses white noise as a training signal is required. This paper proposes a new feedback ANC system. Here we modify both the FxLMS and the VSS-LMS algorithms to improve noise attenuation and modelling accuracy for the overall system. The proposed algorithm stops injection of the white noise at the optimum point and reactivates the injection during operation, if needed, to maintain the performance of the system. Preventing continuous injection of the white noise increases the performance of the proposed method significantly and makes it more desirable for practical ANC systems. Computer simulation results shown in this paper indicate the effectiveness of the proposed method.
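To make the FxLMS component concrete, the sketch below shows a single generic filtered-x LMS weight update (a hypothetical illustration assuming a known secondary-path estimate, not the modified algorithm proposed in the paper):

```python
import numpy as np

def fxlms_step(w, x_buffer, e, s_hat, mu=0.01):
    """One generic filtered-x LMS (FxLMS) adaptation step.

    w        : current control filter coefficients
    x_buffer : recent reference samples, newest first, long enough to cover
               both the control filter and the secondary-path estimate
    e        : current error-microphone sample
    s_hat    : estimate of the secondary path impulse response
    """
    L = len(w)
    # Filter the reference signal through the secondary-path estimate ("filtered-x").
    x_filtered = np.array([np.dot(s_hat, x_buffer[i:i + len(s_hat)]) for i in range(L)])
    # LMS update driven by the filtered reference; the sign depends on the
    # convention used for the error signal (residual noise here).
    return w - mu * e * x_filtered

# Hypothetical toy usage.
rng = np.random.default_rng(1)
w = np.zeros(8)
s_hat = np.array([0.6, 0.3, 0.1])
x_buffer = rng.normal(size=8 + len(s_hat))
w = fxlms_step(w, x_buffer, e=0.2, s_hat=s_hat)
print(w)
```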
Abstract:
This paper presents a higher-order beam-column formulation that can capture the geometrically non-linear behaviour of steel framed structures which contain a multiplicity of slender members. Despite advances in computational frame software, analyses of large frames can still be problematic from a numerical standpoint and so the intent of the paper is to fulfil a need for versatile, reliable and efficient non-linear analysis of general steel framed structures with very many members. Following a comprehensive review of numerical frame analysis techniques, a fourth-order element is derived and implemented in an updated Lagrangian formulation, and it is able to predict flexural buckling, snap-through buckling and large displacement post-buckling behaviour of typical structures whose responses have been reported by independent researchers. The solutions are shown to be efficacious in terms of a balance of accuracy and computational expediency. The higher-order element forms a basis for augmenting the geometrically non-linear approach with material non-linearity through the refined plastic hinge methodology described in the companion paper.
Abstract:
In the companion paper, a fourth-order element formulation in an updated Lagrangian framework was presented to handle geometric non-linearities. The formulation of the present paper extends this to include material non-linearity by proposing a refined plastic hinge approach to analyse large steel framed structures with many members, for which contemporary algorithms based on the plastic zone approach can be computationally problematic. This concept is an advancement of conventional plastic hinge approaches, as the refined plastic hinge technique allows for gradual yielding (recognised as distributed plasticity across the element section) and a condition of full plasticity, as well as including strain hardening. It is founded on interaction yield surfaces specified analytically in terms of force resultants, and achieves accurate and rapid convergence for large frames in which geometric and material non-linearity are significant. The solutions are shown to be efficacious in terms of a balance of accuracy and computational expediency. In addition to its numerical efficiency, the present versatile approach is able to capture different kinds of material and geometric non-linearities in general applications of steel structures, thereby offering an efficacious and accurate means of assessing the non-linear behaviour of structures in engineering practice.