964 results for Roundness errors


Relevance:

10.00%

Publisher:

Abstract:

Antigen selection of B cells within the germinal center reaction generally leads to the accumulation of replacement mutations in the complementarity-determining regions (CDRs) of immunoglobulin genes. Studies of mutations in IgE-associated VDJ gene sequences have cast doubt on the role of antigen selection in the evolution of the human IgE response, and it may be that selection for high-affinity antibodies is a feature of some but not all allergic diseases. The severity of IgE-mediated anaphylaxis is such that it could result from higher-affinity IgE antibodies. We therefore investigated IGHV mutations in IgE-associated sequences derived from ten individuals with a history of anaphylactic reactions to bee or wasp venom or peanut allergens. IgG sequences, which more certainly experience antigen selection, served as a control dataset. A total of 6025 unique IgE and 5396 unique IgG sequences were generated using high-throughput 454 pyrosequencing. The proportion of replacement mutations seen in the CDRs of the IgG dataset was significantly higher than that of the IgE dataset, and the IgE sequences showed little evidence of antigen selection. To exclude the possibility that 454 sequencing errors had compromised the analysis, the datasets were rigorously filtered down to 90 core IgE sequences and 411 IgG sequences. These sequences were present as both forward and reverse reads, and so were most unlikely to include sequencing errors. The filtered datasets confirmed that antigen selection plays a greater role in the evolution of IgG sequences than of IgE sequences derived from the study participants.
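At its core, this kind of comparison is a proportion test on mutation counts. The short Python sketch below is purely illustrative and is not the authors' pipeline; the counts and the choice of Fisher's exact test are assumptions made only for the example.

```python
# Illustrative only: compare the proportion of replacement mutations falling in
# the CDRs between two antibody sequence sets. Counts are made up, not study data.
from scipy.stats import fisher_exact

# (replacement mutations in CDRs, replacement mutations in framework regions)
ige_counts = (120, 380)   # hypothetical IgE counts
igg_counts = (260, 340)   # hypothetical IgG counts

table = [list(ige_counts), list(igg_counts)]
odds_ratio, p_value = fisher_exact(table)

ige_prop = ige_counts[0] / sum(ige_counts)
igg_prop = igg_counts[0] / sum(igg_counts)
print(f"CDR replacement proportion: IgE {ige_prop:.2f}, IgG {igg_prop:.2f}")
print(f"Fisher's exact test: odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
```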

Relevance:

10.00%

Publisher:

Abstract:

Cost estimating is a key task within Quantity Surveyors' (QS) offices. Provision of an accurate estimate is vital to ensure that the objectives of the client are met by staying within the client's budget. Building Information Modelling (BIM) is an evolving technology that has gained attention in construction industries all over the world. Benefits from the use of BIM include cost and time savings if the processes used by the procurement team are adapted to maximise the benefits of BIM. BIM can be used by QSs to automate aspects of quantity take-off and the preparation of estimates, decreasing turnaround time and assisting in controlling errors and inaccuracies. The Malaysian government has decided to require the use of BIM for its projects beginning in 2016. However, slow uptake of BIM is reported both within companies and in supporting collaboration within the Malaysian industry. It has been recommended that QSs start evaluating the impact of BIM on their practices. This paper reviews the perspectives of QSs in Malaysia towards the use of BIM to achieve more dependable results in their cost estimating practice. The objectives of this paper include identifying strategies for improving practice and potential adoption drivers that lead QSs to BIM usage in their construction projects. From the expert interviews, it was found that, despite still using traditional methods and not practising BIM, the interviewees had acquired some limited knowledge of BIM. There are several drivers that could potentially motivate them to employ BIM in their practices. These include client demands, innovation in traditional methods, speed in estimating costs, reduced time and costs, improvement in practices and self-awareness, efficiency in projects, and competition from other companies. The findings of this paper identify the potential drivers for encouraging Malaysian Quantity Surveyors to exploit BIM in their construction projects.

Relevance:

10.00%

Publisher:

Abstract:

Purpose – Ideally, there is no wear in the hydrodynamic lubrication regime. A small amount of wear occurs during start and stop of the machines, and the amount of wear is so small that it is difficult to measure with accuracy. Various wear measuring techniques have been used, among which out-of-roundness measurement was found to be the most reliable method of measuring small wear quantities in journal bearings. This technique was further developed to achieve higher accuracy in measuring small wear quantities, and the method proved to be reliable as well as inexpensive. The paper aims to discuss these issues. Design/methodology/approach – In an experimental study, the effect of antiwear additives was studied on journal bearings lubricated with oil containing solid contaminants. The test duration was too long and the wear quantities achieved were too small. To minimise the test duration, short tests of about 90 min duration were conducted and wear was measured by recording changes in a variety of parameters related to weight, geometry and wear debris. Out-of-roundness was found to be the most effective method. This method was further refined by enlarging the out-of-roundness traces on a photocopier, and it proved to be reliable and inexpensive. Findings – The study revealed that the most commonly used wear measurement techniques, such as weight loss, roughness changes and change in particle count, were not adequate for measuring small wear quantities in journal bearings. The out-of-roundness method, with some refinements, was found to be one of the most reliable methods for measuring small wear quantities in journal bearings working in the hydrodynamic lubrication regime. By enlarging the out-of-roundness traces and determining the worn area of the bearing cross-section, weight loss in bearings was calculated, which was repeatable and reliable. Research limitations/implications – This research is basic in nature, in that a rudimentary solution has been developed for measuring small wear quantities in rotary devices such as journal bearings. The method requires enlarging traces on a photocopier and determining the shape of the worn area on an out-of-roundness trace on a transparency, which is a simple but crude method. An automated procedure may be required to determine the weight loss from the out-of-roundness traces directly. This method can be very useful in reducing test duration and measuring wear quantities with higher precision in situations where wear quantities are very small. Practical implications – This research provides a reliable method of measuring wear of circular geometry. The Talyrond equipment used for measuring the change in out-of-roundness due to wear of bearings shows high potential to be used also as a wear measuring device. Measurement of weight loss from the traces is an enhanced capability of this equipment, and this research may lead to the development of a modified version of Talyrond-type equipment for wear measurements in circular machine components. Originality/value – Wear measurement in hydrodynamic bearings requires long-duration tests to achieve adequate wear quantities. Out-of-roundness is one of the geometrical parameters that changes with the progression of wear in circular components; thus, out-of-roundness is found to be an effective wear measuring parameter that relates to change in geometry. The method of increasing the sensitivity by enlarging the out-of-roundness traces is original work, through which the area of the worn cross-section can be determined and weight loss derived with higher precision for materials of known density.
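To make the final step concrete, the arithmetic behind the weight-loss estimate is simply worn cross-sectional area multiplied by bearing axial length and material density. The Python sketch below is a hypothetical illustration with made-up values, not data from the paper, and it assumes the worn area has already been rescaled from the enlarged trace back to the true bearing dimensions.

```python
# Illustrative sketch (hypothetical values): estimate bearing weight loss from
# a true-scale worn cross-sectional area, the bearing length and the density.
worn_cross_section_mm2 = 0.012   # worn area of the bore, already at true scale
bearing_length_mm = 25.0         # axial length of the bearing
density_g_per_mm3 = 8.8e-3       # e.g. a bronze bearing lining (assumed)

weight_loss_g = worn_cross_section_mm2 * bearing_length_mm * density_g_per_mm3
print(f"estimated weight loss: {weight_loss_g * 1000:.1f} mg")
```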

Relevance:

10.00%

Publisher:

Abstract:

Pattern recognition is a promising approach for the identification of structural damage using measured dynamic data. Much of the research on pattern recognition has employed artificial neural networks (ANNs) and genetic algorithms as systematic ways of matching pattern features. The selection of a damage-sensitive and noise-insensitive pattern feature is important for all structural damage identification methods. Accordingly, a neural network-based damage detection method using frequency response function (FRF) data is presented in this paper. This method can effectively consider uncertainties of the measured data from which training patterns are generated. The proposed method reduces the dimension of the initial FRF data, transforms it into new damage indices, and employs an ANN for the actual damage localization and quantification using the damage patterns recognized by the algorithm. In civil engineering applications, the measurement of dynamic response under field conditions always contains noise components from environmental factors. In order to evaluate the performance of the proposed strategy with noise-polluted data, noise-contaminated measurements are also introduced to the proposed algorithm. ANNs with optimal architecture give minimum training and testing errors and provide precise damage detection results. In order to maximize damage detection performance, the optimal ANN architecture is identified by selecting the number of hidden layers and the number of neurons per hidden layer through trial and error. In real testing, the number of measurement points and the measurement locations used to obtain the structural response are critical for damage detection. Therefore, optimal sensor placement to improve damage identification is also investigated herein. A finite element model of a two-storey framed structure is used to train the neural network. It shows accurate performance and gives low error with simulated and noise-contaminated data for single and multiple damage cases. As a result, the proposed method can be used for structural health monitoring and damage detection, particularly for cases where the measurement data is very large. Furthermore, it is suggested that an optimal ANN architecture can detect damage occurrence with good accuracy and can provide damage quantification with reasonable accuracy under varying levels of damage.
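For readers who want a feel for the workflow (compress FRF vectors into a few damage indices, then train an ANN to map indices to damage), the sketch below is a minimal, synthetic illustration. The PCA step, the layer sizes and the fabricated data are assumptions for the example and do not reproduce the paper's model.

```python
# Minimal sketch (not the paper's model): reduce noisy FRF-like vectors to a few
# "damage indices" with PCA, then train a small MLP to predict damage severity.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_freq_lines = 200, 512

# Synthetic damage severity and FRF-like vectors whose shape drifts with it.
severity = rng.uniform(0.0, 0.3, n_samples)
frf = (rng.normal(size=(n_samples, n_freq_lines))
       + severity[:, None] * np.linspace(0, 5, n_freq_lines))
frf_noisy = frf + 0.05 * rng.normal(size=frf.shape)   # noise-polluted measurements

indices = PCA(n_components=10).fit_transform(frf_noisy)  # reduced damage indices
model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0)
model.fit(indices[:150], severity[:150])                 # train on 150 samples
print("test R^2:", round(model.score(indices[150:], severity[150:]), 2))
```

In practice the hidden-layer sizes would be tuned by trial and error, mirroring the architecture search described in the abstract.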

Relevance:

10.00%

Publisher:

Abstract:

People get into healthcare because they want to help society. And when a new hospital is briefed, everyone tries to do their best, but the process is mired in the impossibility of the task. Stakeholders rarely understand the architectural process, nobody can predict the future, and the only thing for certain is that everything will change as the project unfolds, revealing errors in initial assumptions and calculations, shifts in needs, new technologies and so on. Yet there is always pressure to keep to the programme and to press on regardless. This chaos eventually leads to suboptimal results: hospitals the world over are riddled with inefficiencies, idiosyncrasies, incredible wastage and features that lead to poor clinical outcomes. This talk will sketch out the basics of Scrum, the most popular open-source Lean/Agile methodology. It will discuss what healthcare designers can learn from the geeks in Silicon Valley to reduce risk, meet deadlines and deliver the highest possible value for the budget despite the uncertainty.

Relevance:

10.00%

Publisher:

Abstract:

Background: Paediatric-onset inflammatory bowel disease (IBD) may cause alterations in energy requirements and invalidate the use of standard prediction equations. Our aim was to evaluate four commonly used prediction equations for resting energy expenditure (REE) in children with IBD. Methods: Sixty-three children had repeated measurements of REE as part of a longitudinal research study, yielding a total of 243 measurements. These were compared with REE predicted from the Schofield, Oxford, FAO/WHO/UNU, and Harris-Benedict equations using the Bland-Altman method. Results: Mean (±SD) age of the patients was 14.2 (2.4) years. Mean measured REE was 1566 (336) kcal per day, compared with 1491 (236), 1441 (255), 1481 (232), and 1435 (212) kcal per day calculated from the Schofield, Oxford, FAO/WHO/UNU, and Harris-Benedict equations, respectively. While the Schofield equation demonstrated the smallest difference between measured and predicted REE, it, along with the other equations tested, did not perform uniformly across all subjects, showing greater errors at either end of the spectrum of energy expenditure. Smaller differences were found for all prediction equations in Crohn's disease compared with ulcerative colitis. Conclusions: Of the commonly used equations, the Schofield equation should be used in paediatric patients with IBD when measured values cannot be obtained. (Inflamm Bowel Dis 2010)
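The Bland-Altman comparison used here reduces to a bias (the mean difference between predicted and measured REE) and limits of agreement (bias ± 1.96 SD of the differences). The sketch below shows that calculation on made-up values; it is not the study data and does not implement the Schofield equation itself.

```python
# Bland-Altman sketch on hypothetical paired values (kcal/day), not study data.
import numpy as np

measured_ree = np.array([1450., 1620., 1390., 1710., 1560.])   # indirect calorimetry
predicted_ree = np.array([1400., 1550., 1430., 1650., 1500.])  # e.g. equation estimates

diff = predicted_ree - measured_ree
bias = diff.mean()                     # mean difference (bias)
loa = 1.96 * diff.std(ddof=1)          # half-width of the limits of agreement
print(f"bias = {bias:.0f} kcal/day, "
      f"limits of agreement = {bias - loa:.0f} to {bias + loa:.0f} kcal/day")
```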

Relevance:

10.00%

Publisher:

Abstract:

This chapter considers the role of the law in communicating patient safety. Downie, Lahey, Ford, et al's (2006) preventing, knowing and responding theoretical framework is adopted to classify the different elements of patient safety law. Rather than setting out all relevant patient safety laws in detail, this chapter highlights key legal strategies which are employed to: prevent the occurrence of patient safety incidents (preventing); support the discovery and open discussion of patient safety incidents when they do occur (knowing); and guide responses after they occur (responding) (Downie, Lahey, Ford, et al 2006). The law is increasingly being invoked to facilitate open discussion of, and communication surrounding, patient safety. After highlighting some legal strategies used to communicate patient safety, two practice examples are presented. The practice examples highlight different aspects of patient safety law and are indicative of communication issues commonly faced in practice. The first practice example focuses on the role of the Coroner in communicating patient safety. This example highlights the investigative role of the law in relation to patient safety (knowing). It also showcases the responding and preventing elements in respect of the significant number of communication errors that can occur in a multi-disciplinary, networked health system. The main focus of the second practice example is responding: it illustrates how the law responds to health service providers' and professionals' miscommunication (and subsequent incidents) during treatment; however, it also touches upon knowing and preventing.

Relevance:

10.00%

Publisher:

Abstract:

The Taylor coefficients c and d of the EM form factor of the pion are constrained using analyticity, knowledge of the phase of the form factor in the time-like region 4m_π² ≤ t ≤ t_in, and its value at one space-like point, using as input the (g − 2) of the muon. This is achieved using the technique of Lagrange multipliers, which gives a transparent expression for the corresponding bounds. We present a detailed study of the sensitivity of the bounds to the choice of time-like phase and errors present in the space-like data, taken from recent experiments. We find that our results constrain c stringently. We compare our results with those in the literature and find agreement with the chiral perturbation-theory results for c. We obtain d ∼ O(10) GeV⁻⁶ when c is set to the chiral perturbation-theory values.

Relevance:

10.00%

Publisher:

Abstract:

Many software applications extend their functionality by dynamically loading libraries into their allocated address space. However, shared libraries are also often of unknown provenance and quality and may contain accidental bugs or, in some cases, deliberately malicious code. Most sandboxing techniques that address these issues require recompilation of the libraries using custom tool chains, require significant modifications to the libraries, do not retain the benefits of single address-space programming, do not completely isolate guest code, or incur substantial performance overheads. In this paper we present LibVM, a sandboxing architecture for isolating libraries within a host application without requiring any modifications to the shared libraries themselves, while still retaining the benefits of a single address space and also introducing a system call interposition layer that allows complete arbitration over a shared library's functionality. We show how to utilize contemporary hardware virtualization support towards this end with reasonable performance overheads; in the absence of such hardware support, our model can also be implemented using a software-based mechanism. We ensure that our implementation conforms as closely as possible to existing shared library manipulation functions, minimizing the amount of effort needed to apply such isolation to existing programs. Our experimental results show that it is easy to gain immediate benefits in scenarios where the goal is to guard the host application against unintentional programming errors when using shared libraries, as well as in more complex scenarios where a shared library is suspected of being actively hostile. In both cases, no changes are required to the shared libraries themselves.
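By way of contrast with LibVM's single-address-space design, the crudest widely used alternative is to push calls into an untrusted shared library out to a separate process, accepting the loss of the shared address space. The Python sketch below illustrates only that baseline idea; it is not LibVM, and the library path "./libuntrusted.so" and the function name "compute" are hypothetical.

```python
# Conceptual baseline only (NOT LibVM): call into an untrusted shared library
# from a separate worker process, so a crash there cannot corrupt the host.
import ctypes
from multiprocessing import Process, Queue

def call_in_worker(lib_path, func_name, arg, out):
    """Load the shared library and call one function, in the child only."""
    lib = ctypes.CDLL(lib_path)          # a crash here stays in the child process
    func = getattr(lib, func_name)
    func.restype = ctypes.c_int
    func.argtypes = [ctypes.c_int]
    out.put(func(arg))

if __name__ == "__main__":
    out = Queue()
    worker = Process(target=call_in_worker,
                     args=("./libuntrusted.so", "compute", 42, out))
    worker.start()
    worker.join(timeout=5)
    result = out.get() if not out.empty() else None
    print("library result:", result if result is not None else "unavailable (worker failed)")
```

The cost of this baseline, which LibVM aims to avoid, is the copying and marshalling forced by the process boundary.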

Relevance:

10.00%

Publisher:

Abstract:

A detailed study of the solvation dynamics of a charged coumarin dye molecule in gamma-cyclodextrin/water has been carried out by using two different theoretical approaches. The first approach is based on a multishell continuum model (MSCM). This model predicts the time scales of the dynamics rather well, provided an accurate description of the frequency-dependent dielectric function is supplied. The reason for this rather surprising agreement is two-fold: first, there is a cancellation of errors; second, the two-zone model mimics the heterogeneous microenvironment surrounding the ion rather well. The second approach is based on molecular hydrodynamic theory (MHT). In this molecular approach, the solvation dynamics has been studied by restricting the translational motion of the solvent molecules enclosed within the cavity. The results from the molecular theory are also in good agreement with the experimental results. Our study indicates that, in the present case, the restricted environment affects only the long-time decay of the solvation time correlation function. The short-time dynamics is still governed by the librational (and/or vibrational) modes present in bulk water.

Relevance:

10.00%

Publisher:

Abstract:

Melanopsin-containing intrinsically photosensitive retinal ganglion cells (ipRGCs) mediate the pupil light reflex (PLR) during light onset and at light offset (the post-illumination pupil response, PIPR). Recent evidence shows that the PLR and PIPR can provide non-invasive, objective markers of age-related retinal and optic nerve disease; however, there is no consensus on the effects of healthy ageing or refractive error on ipRGC-mediated pupil function. Here we isolated melanopsin contributions to the pupil control pathway in 59 human participants with no ocular pathology across a range of ages and refractive errors. We show that there is no effect of age or refractive error on ipRGC inputs to the human pupil control pathway. The stability of the ipRGC-mediated pupil response across the human lifespan provides a functional correlate of the robustness observed during ageing in rodent models.

Relevance:

10.00%

Publisher:

Abstract:

Despite great advances in very-large-scale integrated circuit design and manufacturing, the performance of even the best available high-speed, high-resolution analog-to-digital converters (ADCs) is known to deteriorate while acquiring fast-rising, high-frequency, and nonrepetitive waveforms. Waveform digitizers (ADCs) used in high-voltage impulse recordings and measurements are invariably subjected to such waveforms. Errors resulting from lowered ADC performance can be unacceptably high, especially when higher accuracies have to be achieved (e.g., when the digitizer is part of a reference measuring system). Static and dynamic nonlinearities (estimated independently) are vital indices for evaluating the performance and suitability of ADCs to be used in such environments. Typically, the estimation of static nonlinearity takes 10-12 h or more (for a 12-bit ADC), and dynamic characterization requires the acquisition of millions of samples at high input frequencies. ADCs with even higher resolution and faster sampling speeds will soon become available, so there is a need to reduce the testing time for evaluating these parameters. This paper proposes a novel and time-efficient method for the simultaneous estimation of static and dynamic nonlinearity from a single test. This is achieved by conceiving a test signal composed of a high-frequency sinusoid (which addresses the dynamic assessment) modulated by a low-frequency ramp (relevant to the static part). Details of the implementation and results on two digitizers are presented and compared with nonlinearities determined by the existing standardized approaches. Good agreement in results and the achievable time savings indicate the method's suitability.
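As a rough illustration of what such a composite stimulus might look like, the sketch below amplitude-modulates a fast sinusoid with a slow full-range ramp. The exact waveform construction and every parameter value are assumptions made for the example, not the signal specified in the paper.

```python
# One possible construction of a composite test stimulus: a high-frequency
# sinusoid amplitude-modulated by a low-frequency ramp (illustrative values).
import numpy as np

fs = 100e6                        # assumed digitizer sampling rate, Hz
t = np.arange(0, 10e-3, 1 / fs)   # 10 ms record
f_fast = 1e6                      # fast sinusoid exercising dynamic behaviour, Hz
ramp = t / t[-1]                  # slow 0 -> 1 ramp sweeping the amplitude range
test_signal = ramp * np.sin(2 * np.pi * f_fast * t)  # feed to the ADC under test
```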

Relevance:

10.00%

Publisher:

Abstract:

This paper presents an overview of the issues in precisely defining, specifying and evaluating the dependability of software, particularly in the context of computer-controlled process systems. Dependability is intended to be a generic term embodying various quality factors and is useful for both software and hardware. While developments in quality assurance and reliability theories have proceeded mostly in independent directions for hardware and software systems, we present here the case for developing a unified framework of dependability, a facet of the operational effectiveness of modern technological systems, and develop a hierarchical systems model helpful in clarifying this view. In the second half of the paper, we survey the models and methods available for measuring and improving software reliability. The nature of software "bugs", the failure history of the software system in the various phases of its lifecycle, reliability growth in the development phase, estimation of the number of errors remaining in the operational phase, and the complexity of the debugging process have all been considered in varying degrees of detail. We also discuss the notion of software fault tolerance, methods of achieving it, and the status of other measures of software dependability such as maintainability, availability and safety.
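As one concrete example of the class of reliability-growth models such surveys cover (chosen here purely for illustration; the paper is not tied to this particular model), the Goel-Okumoto NHPP fits a mean-value function a(1 − exp(−b·t)) to cumulative failure counts and reads off an estimate of residual faults. The failure data below is synthetic.

```python
# Illustrative fit of the Goel-Okumoto reliability-growth model to synthetic
# cumulative failure counts; "a" estimates the total number of latent faults.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative failures by time t: a * (1 - exp(-b * t))."""
    return a * (1.0 - np.exp(-b * t))

weeks = np.arange(1, 13)
cum_failures = np.array([5, 9, 13, 16, 18, 20, 21, 22, 23, 23, 24, 24])
(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_failures, p0=(30.0, 0.1))
print(f"estimated total faults a = {a_hat:.1f}, "
      f"remaining ~ {a_hat - cum_failures[-1]:.1f}")
```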

Relevance:

10.00%

Publisher:

Abstract:

The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in the analysis is the overlap of signals, which makes it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active set deconvolution in deriving a transit time spectrum from a coded-excitation chirp signal and the measured output signal. The ultrasound wave travels along a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched-filtering has better accuracy (0.13 μs vs. 0.18 μs standard deviation), deconvolution has a 3.5 times better side-lobe to main-lobe ratio. Higher side-lobe suppression is important to further improve image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity.
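To make the matched-filtering step concrete, the sketch below cross-correlates a synthetic chirp with a received trace containing a direct and a reflected arrival, then reads the two transit times off the correlation peaks. The chirp parameters, delays and noise level are invented for the illustration and are unrelated to the study's measurement setup.

```python
# Matched-filter sketch on synthetic signals: cross-correlate the received
# trace with the transmitted chirp and locate the two strongest peaks.
import numpy as np
from scipy.signal import chirp, correlate, find_peaks

fs = 10e6                                     # sampling rate, Hz
t = np.arange(0, 20e-6, 1 / fs)               # 20 us transmitted chirp
tx = chirp(t, f0=0.5e6, f1=2.5e6, t1=t[-1])   # coded excitation

rx = np.zeros(2000)                           # received record (200 us)
d_direct, d_reflect = 500, 650                # arrival delays in samples
rx[d_direct:d_direct + tx.size] += tx         # direct path
rx[d_reflect:d_reflect + tx.size] += 0.6 * tx # weaker, overlapping reflection
rx += 0.05 * np.random.default_rng(0).normal(size=rx.size)

mf = correlate(rx, tx, mode="valid")          # matched-filter output
peaks, _ = find_peaks(mf, distance=100)       # suppress nearby side-lobes
top_two = np.sort(peaks[np.argsort(mf[peaks])[-2:]])
print("estimated transit times (us):", top_two / fs * 1e6)
```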

Relevance:

10.00%

Publisher:

Abstract:

The export of sediments from coastal catchments can have detrimental impacts on estuaries and near-shore reef ecosystems such as the Great Barrier Reef. Catchment management approaches aimed at reducing sediment loads require monitoring to evaluate their effectiveness in reducing loads over time. However, load estimation is not a trivial task due to the complex behaviour of constituents in natural streams, the variability of water flows and often a limited amount of data. Regression is commonly used for load estimation and provides a fundamental tool for trend estimation by standardising for other time-specific covariates such as flow. This study investigates whether load estimates and the resultant power to detect trends can be enhanced by (i) modelling the error structure so that temporal correlation can be better quantified, (ii) making use of predictive variables, and (iii) identifying an efficient and feasible sampling strategy that may be used to reduce sampling error. To achieve this, we propose a new regression model that includes an innovative compounding-errors model structure and uses two additional predictive variables (average discounted flow and turbidity). By combining this modelling approach with a new, regularly optimised sampling strategy, which adds uniformity to the event sampling strategy, the predictive power was increased to 90%. Using the enhanced regression model proposed here, it was possible to detect a trend of 20% over 20 years. This result is in stark contrast to previous conclusions presented in the literature.
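As a simplified stand-in for the kind of model described above (it is not the authors' compounding-errors model), the sketch below fits a log-log rating-curve regression with flow and a turbidity covariate and an AR(1) error term. All data, coefficients and the choice of statsmodels GLSAR are assumptions made for the illustration.

```python
# Simplified illustration: log-log load regression with an extra covariate and
# AR(1) serial correlation in the errors. Synthetic data, not the study's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
log_flow = rng.normal(2.0, 0.8, n)                       # log discharge
log_turbidity = 0.5 * log_flow + rng.normal(0.0, 0.3, n) # correlated proxy

eps = np.zeros(n)                                        # AR(1) disturbance
for i in range(1, n):
    eps[i] = 0.6 * eps[i - 1] + rng.normal(0.0, 0.2)

log_conc = 0.3 + 0.9 * log_flow + 0.4 * log_turbidity + eps

X = sm.add_constant(np.column_stack([log_flow, log_turbidity]))
model = sm.GLSAR(log_conc, X, rho=1)                     # regression with AR(1) errors
results = model.iterative_fit(maxiter=5)
print(results.params)                                    # intercept, flow, turbidity
```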