Abstract:
The greatest effect on reducing mortality in breast cancer comes from the detection and treatment of invasive cancer when it is as small as possible. Although mammography screening is known to be effective, observer errors are frequent, and false-negative cancers can be found in retrospective reviews of prior mammograms. In 2001, 67 women with 69 surgically proven cancers detected at screening in the Mammography Centre of Helsinki University Hospital also had previous mammograms available. These mammograms were analyzed by an experienced screening radiologist, who found that 36 lesions were already visible in previous screening rounds. CAD (Second Look v. 4.01) detected 23 of these missed lesions. Eight readers with different levels of experience in mammography screening read the films of 200 women with and without CAD. These films included 35 of the missed lesions and 16 screen-detected cancers. CAD sensitivity was 70.6% and specificity 15.8%. Use of CAD lengthened the mean reading time but did not significantly affect the readers' sensitivities or specificities. The usefulness of the applied version of CAD (Second Look v. 4.01) is therefore questionable. Because none of the eight readers found exactly the same cancers, two reading methods were compared: summarized independent reading (at least a single cancer-positive opinion within the group considered decisive) and conference consensus reading (the cancer-positive opinion of the reader majority considered decisive). The greatest sensitivity, 74.5%, was achieved when the independent readings of the four best-performing readers were summarized. Overall, the summarized independent readings were more sensitive than the conference consensus readings (64.7% vs. 43.1%), while there was far less difference in mean specificities (92.4% vs. 97.7%). After detecting a suspicious lesion, the radiologist has to decide on the most accurate, fast, and cost-effective means of further work-up. The feasibility of fine-needle aspiration cytology (FNAC) and core-needle biopsy (CNB) in the diagnosis of breast lesions was compared in a non-randomised, retrospective study of 580 breast lesions (503 malignant) in 572 patients. The absolute sensitivity of CNB was better than that of FNAC, 96% (206/214) vs. 67% (194/289) (p < 0.0001). An additional needle biopsy or surgical biopsy was performed for 93 and 62 patients, respectively, after FNAC, but for only 2 and 33 patients after CNB. The frequent need for supplementary biopsies and the unnecessary axillary operations due to false-positive findings made FNAC (294) more expensive than CNB (223), and because the advantage of a quick analysis vanishes during the overall diagnostic and referral process, CNB is recommended as the initial biopsy method.
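To make the two pooling rules concrete, the sketch below contrasts summarized independent reading (any single cancer-positive opinion is decisive) with conference consensus reading (the majority opinion is decisive) and computes the resulting sensitivity and specificity. The reader decisions in the example are hypothetical placeholders, not the study data.

```python
# Sketch: pooling multiple readers' mammography calls under two decision rules.
# The case data below are illustrative only, not the study's actual reader decisions.

def pool_any(calls):
    """Summarized independent reading: one cancer-positive opinion is decisive."""
    return any(calls)

def pool_majority(calls):
    """Conference consensus reading: the majority opinion is decisive."""
    return sum(calls) > len(calls) / 2

def sensitivity_specificity(rule, cases):
    tp = sum(rule(c["calls"]) for c in cases if c["cancer"])
    tn = sum(not rule(c["calls"]) for c in cases if not c["cancer"])
    n_pos = sum(c["cancer"] for c in cases)
    n_neg = len(cases) - n_pos
    return tp / n_pos, tn / n_neg

# Each case: true cancer status and eight readers' binary calls (hypothetical).
cases = [
    {"cancer": True,  "calls": [1, 0, 0, 1, 0, 0, 0, 0]},
    {"cancer": True,  "calls": [0, 0, 0, 0, 0, 0, 0, 0]},
    {"cancer": False, "calls": [0, 1, 0, 0, 0, 0, 0, 0]},
    {"cancer": False, "calls": [0, 0, 0, 0, 0, 0, 0, 0]},
]

for name, rule in [("independent (any positive)", pool_any),
                   ("consensus (majority)", pool_majority)]:
    sens, spec = sensitivity_specificity(rule, cases)
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Even on this toy data the expected trade-off appears: the any-positive rule is more sensitive, the majority rule more specific, mirroring the pattern reported in the abstract.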
Abstract:
A new form of a multi-step transversal linearization (MTL) method is developed and numerically explored in this study for a numeric-analytical integration of non-linear dynamical systems under deterministic excitations. As with other transversal linearization methods, the present version also requires that the linearized solution manifold transversally intersects the non-linear solution manifold at a chosen set of points or cross-section in the state space. However, a major point of departure of the present method is that it has the flexibility of treating non-linear damping and stiffness terms of the original system as damping and stiffness terms in the transversally linearized system, even though these linearized terms become explicit functions of time. From this perspective, the present development is closely related to the popular practice of tangent-space linearization adopted in finite element (FE) based solutions of non-linear problems in structural dynamics. The only difference is that the MTL method would require construction of transversal system matrices in lieu of the tangent system matrices needed within an FE framework. The resulting time-varying linearized system matrix is then treated as a Lie element using Magnus’ characterization [W. Magnus, On the exponential solution of differential equations for a linear operator, Commun. Pure Appl. Math., VII (1954) 649–673] and the associated fundamental solution matrix (FSM) is obtained through repeated Lie-bracket operations (or nested commutators). An advantage of this approach is that the underlying exponential transformation could preserve certain intrinsic structural properties of the solution of the non-linear problem. Yet another advantage of the transversal linearization lies in the non-unique representation of the linearized vector field – an aspect that has been specifically exploited in this study to enhance the spectral stability of the proposed family of methods and thus contain the temporal propagation of local errors. A simple analysis of the formal orders of accuracy is provided within a finite dimensional framework. Only a limited numerical exploration of the method is presently provided for a couple of popularly known non-linear oscillators, viz. a hardening Duffing oscillator, which has a non-linear stiffness term, and the van der Pol oscillator, which is self-excited and has a non-linear damping term.
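For reference, Magnus' result expresses the fundamental solution matrix of a time-varying linear system dy/dt = A(t)y as a single matrix exponential whose exponent is built from nested commutators of A; the leading terms of the standard expansion are shown below (written here in generic notation rather than the paper's).

```latex
% Fundamental solution matrix via the Magnus expansion (leading terms).
\Phi(t,t_0) = \exp\!\big(\Omega(t)\big), \qquad
\Omega(t) = \int_{t_0}^{t} A(\tau)\,d\tau
  + \tfrac{1}{2}\int_{t_0}^{t}\!\!\int_{t_0}^{\tau_1}
    \big[A(\tau_1),A(\tau_2)\big]\,d\tau_2\,d\tau_1 + \cdots,
\qquad [A,B] = AB - BA .
```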
Abstract:
We consider a suspended elastic rod under longitudinal compression. The compression can be used to tune the potential energy for transverse displacements from the harmonic to the double-well regime. The two minima in the potential energy curve describe two possible buckled states. Using transition state theory (TST), we have calculated the rate of conversion from one state to the other. If the strain ε = 4ε_c, the simple TST rate diverges. We suggest a method to correct this divergence for quantum calculations. We also find that zero-point energy contributions can be quite large, so that single-mode calculations can lead to large errors in the rate.
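For context, the simple one-dimensional TST escape rate referred to above has the standard Arrhenius-like form below, with ω₀ the frequency at the well minimum, ΔV the barrier height of the double well, and k_B T the thermal energy; the paper's quantum and multi-mode corrections modify this expression.

```latex
% Simple one-dimensional TST escape rate (standard form).
k_{\mathrm{TST}} = \frac{\omega_0}{2\pi}\,
\exp\!\left(-\frac{\Delta V}{k_B T}\right)
```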
Abstract:
We have compared the spectral aerosol optical depth (AOD, τ_λ) and aerosol fine mode fraction (AFMF) of Collection 004 (C004), derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) on board the National Aeronautics and Space Administration's (NASA) Terra and Aqua platforms, with that obtained from the Aerosol Robotic Network (AERONET) at Kanpur (26.45°N, 80.35°E), India, for the period 2001-2005. The spatially averaged (0.5° × 0.5°, centered on the AERONET sun photometer) MODIS Level-2 aerosol parameters (10 km at nadir) were compared with the temporally averaged AERONET-measured AOD (within ±30 minutes of the MODIS overpass). We found that MODIS systematically overestimated AOD during the pre-monsoon season (March to June, known to be influenced by dust aerosols). The errors in AOD at 0.66 µm were correlated with the apparent reflectance at 2.1 µm (ρ*(2.1)), which MODIS C004 uses to estimate the surface reflectance in the visible channels (ρ(0.47) = ρ*(2.1)/4, ρ(0.66) = ρ*(2.1)/2). The large errors in AOD (Δτ(0.66) > 0.3) are found to be associated with higher values of ρ*(2.1) (0.18 to 0.25), where the uncertainty in the reflectance ratios is large (Δρ(0.66) ±0.04, Δρ(0.47) ±0.02). This could have resulted in lower surface reflectance and higher aerosol path radiance, and thus led to overestimation of AOD. While the MODIS-derived AFMF had a nearly binary distribution (1 or 0), being too low (AFMF < 0.2) during the dust-loading period and ~1 for the rest of the retrievals, AERONET showed a range of values (0.4 to 0.9). The errors in τ(0.66) were also high in the scattering angle range 110°-140°, where the optical effects of nonspherical dust particles differ from those of spherical particles.
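The validation procedure described above amounts to a simple collocation: spatially average the MODIS Level-2 AOD in a 0.5° × 0.5° box around the site and temporally average the AERONET AOD within ±30 minutes of the overpass. The sketch below illustrates this; the array inputs and default site coordinates are placeholders, not the actual MODIS/AERONET product variables.

```python
# Sketch of the MODIS-AERONET collocation described in the abstract.
# Inputs are hypothetical NumPy arrays, not actual product variable names.
import numpy as np

def collocate(modis_lat, modis_lon, modis_aod, aeronet_time, aeronet_aod,
              site_lat=26.45, site_lon=80.35, overpass_time=0.0):
    """Return (spatially averaged MODIS AOD, temporally averaged AERONET AOD)."""
    # Spatial average: MODIS 10-km retrievals within a 0.5 deg x 0.5 deg box.
    box = (np.abs(modis_lat - site_lat) <= 0.25) & (np.abs(modis_lon - site_lon) <= 0.25)
    modis_mean = np.nanmean(np.where(box, modis_aod, np.nan))

    # Temporal average: AERONET measurements within +/- 30 min of the overpass.
    window = np.abs(aeronet_time - overpass_time) <= 0.5  # times in hours
    aeronet_mean = np.nanmean(np.where(window, aeronet_aod, np.nan))

    return modis_mean, aeronet_mean
```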
Abstract:
With technology scaling, vulnerability to soft errors in random logic is increasing. There is a need for on-line error detection and protection for logic gates even at sea level. The error checker is the key element of an on-line detection mechanism. We compare three different checkers for error detection from the point of view of area, power, and false error detection rates. We find that the double sampling checker (used in Razor) is the simplest and the most area- and power-efficient, but suffers from very high false detection rates of 1.15 times the actual error rate. We also find that the alternative approaches of triple sampling and the integrate-and-sample (I&S) method can be designed to have zero false detection rates, but at increased area, power, and implementation complexity. The triple sampling method has about 1.74 times the area and twice the power of the double sampling method and also needs a complex clock generation scheme. The I&S method needs about 16% more power with 0.58 times the area of the double sampling checker, but comes with more stringent implementation constraints as it requires the detection of small voltage swings.
Abstract:
Purpose: To examine the effects of gaze position and optical blur, similar to that used in multifocal corrections, on stepping accuracy for a precision stepping task among older adults. Methods: Nineteen healthy older adults (mean age, 71.6 +/- 8.8 years) with normal vision performed a series of precision stepping tasks onto a fixed target. The stepping tasks were performed using a repeated-measures design for three gaze positions (fixating on the stepping target as well as 30 and 60 cm farther forward of the stepping target) and two visual conditions (best-corrected vision and with +2.50DS blur). Participants' gaze position was tracked using a head-mounted eye tracker. Absolute, anteroposterior, and mediolateral foot placement errors and within-subject foot placement variability were calculated from the locations of foot and floor-mounted retroreflective markers captured by flash photography of the final foot position. Results: Participants made significantly larger absolute and anteroposterior foot placement errors and exhibited greater foot placement variability when their gaze was directed farther forward of the stepping target. Blur led to significantly increased absolute and anteroposterior foot placement errors and increased foot placement variability. Furthermore, blur differentially increased the absolute and anteroposterior foot placement errors and variability when gaze was directed 60 cm farther forward of the stepping target. Conclusions: Increasing gaze position farther ahead from stepping locations and the presence of blur negatively impact the stepping accuracy of older adults. These findings indicate that blur, similar to that used in multifocal corrections, has the potential to increase the risk of trips and falls among older populations when negotiating challenging environments where precision stepping is required, particularly as gaze is directed farther ahead from stepping locations when walking.
Abstract:
A Finite Element Method (FEM) based forward solver is developed for solving the forward problem of 2D Electrical Impedance Tomography (EIT). The method of weighted residuals with a Galerkin approach is used for the FEM formulation of the EIT forward problem. The algorithm is written in MATLAB 7.0, and the forward problem is studied with a practical biological phantom that was developed. The EIT governing equation is numerically solved to calculate the surface potentials at the phantom boundary for a uniform conductivity. An EIT phantom is developed with an array of 16 electrodes placed on the inner surface of the phantom tank, which is filled with KCl solution. A sinusoidal current is injected through the current electrodes and the differential potentials across the voltage electrodes are measured. The measured data are compared with the differential potentials calculated for the known current and solution conductivity. By comparing the measured voltages with the calculated data, an attempt is made to identify the sources of error so as to improve data quality for better image reconstruction.
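For reference, the EIT governing equation solved by such a forward model is the standard low-frequency conduction equation for the potential φ in a domain Ω with conductivity σ, with a Neumann condition describing the injected current density on the driving electrodes (written here in generic form; the electrode model used in the actual solver may differ).

```latex
% EIT forward problem: potential phi in domain Omega with conductivity sigma.
\nabla \cdot \big(\sigma \,\nabla \phi\big) = 0 \quad \text{in } \Omega, \qquad
\sigma \,\frac{\partial \phi}{\partial n} = J \quad \text{on current electrodes}, \qquad
\sigma \,\frac{\partial \phi}{\partial n} = 0 \quad \text{elsewhere on } \partial\Omega .
```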
Abstract:
Recently, Li and Xia proposed a transmission scheme for wireless relay networks based on the Alamouti space-time code and orthogonal frequency division multiplexing to combat the effect of timing errors at the relay nodes. This transmission scheme is remarkably simple and achieves a diversity order of two for any number of relays. Motivated by its simplicity, this scheme is extended to a more general transmission scheme that can achieve full cooperative diversity for any number of relays. The conditions on the distributed space-time block code (DSTBC) structure that admit its application in the proposed transmission scheme are identified, and it is pointed out that the recently proposed full-diversity, four-group decodable DSTBCs from precoded coordinate-interleaved orthogonal designs and extended Clifford algebras satisfy these conditions. It is then shown how differential encoding at the source can be combined with the proposed transmission scheme to arrive at a new transmission scheme that can achieve full cooperative diversity in asynchronous wireless relay networks with no channel information and no timing-error knowledge at the destination node. Finally, four-group decodable distributed differential space-time block codes applicable in this new transmission scheme for any power-of-two number of relays are also provided.
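For reference, the Alamouti code underlying the Li–Xia scheme maps two information symbols onto two time slots and two transmitters as shown below; the distributed, OFDM-based variant discussed here applies this structure across relays rather than co-located antennas.

```latex
% Alamouti space-time block code: rows = time slots, columns = antennas/relays.
\mathbf{X} =
\begin{pmatrix}
 s_1 & s_2 \\
 -s_2^{*} & s_1^{*}
\end{pmatrix}
```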
Abstract:
Electrical conduction in insulating materials is a complex process, and several theories have been suggested in the literature. Many phenomenological empirical models are in use in the DC cable literature. However, the impact of using different models for cable insulation has not been investigated until now, apart from claims of relative accuracy. The steady-state electric field in DC cable insulation is known to be a strong function of the DC conductivity. The DC conductivity, in turn, is a complex function of electric field and temperature. As a result, under certain conditions, the stress at the cable screen is higher than that at the conductor boundary. The paper presents detailed investigations of using different empirical conductivity models suggested in the literature for HVDC cable applications. It is expressly shown that certain models give rise to erroneous results in electric field and temperature computations. It is pointed out that the use of these models in the design or evaluation of cables will lead to errors.
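As an example of the kind of empirical model in question, one widely used form in the HVDC cable literature expresses the DC conductivity as an exponential function of temperature and field, as below; whether this particular form was among those compared in the paper is an assumption, and the paper's point is precisely that different such fits yield different field and temperature predictions.

```latex
% A common empirical DC conductivity model for cable insulation:
% sigma_0 is a reference conductivity, alpha and beta are material constants.
\sigma(T, E) = \sigma_0 \,
\exp\!\big(\alpha T\big)\,
\exp\!\big(\beta \,|E|\big)
```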
Abstract:
Numerical weather prediction (NWP) models provide the basis for weather forecasting by simulating the evolution of the atmospheric state. A good forecast requires that the initial state of the atmosphere is known accurately and that the NWP model is a realistic representation of the atmosphere. Data assimilation methods are used to produce initial conditions for NWP models: the NWP model background field, typically a short-range forecast, is updated with observations in a statistically optimal way. The objective of this thesis has been to develop methods that allow the assimilation of Doppler radar radial wind observations. The work has been carried out in the High Resolution Limited Area Model (HIRLAM) 3-dimensional variational data assimilation framework. Observation modelling is a key element in exploiting indirect observations of the model variables. In radar radial wind observation modelling, the vertical model wind profile is interpolated to the observation location, and the projection of the model wind vector onto the radar pulse path is calculated. The vertical broadening of the radar pulse volume and the bending of the radar pulse path due to atmospheric conditions are taken into account. Radar radial wind observations are modelled to within observation errors, which consist of instrumental, modelling, and representativeness errors. Systematic and random modelling errors can be minimized by accurate observation modelling. The impact of the random part of the instrumental and representativeness errors can be decreased by calculating spatial averages from the raw observations. Model experiments indicate that spatial averaging clearly improves the fit of the radial wind observations to the model in terms of the observation minus model background (OmB) standard deviation. Monitoring the quality of the observations is an important aspect, especially when a new observation type is introduced into a data assimilation system. Calculating the bias for radial wind observations in the conventional way can yield zero even when there are systematic differences in wind speed and/or direction. A bias estimation method designed for this observation type is introduced in the thesis. Doppler radar radial wind observation modelling, together with the bias estimation method, enables the exploitation of radial wind observations also for NWP model validation. One-month model experiments performed with HIRLAM model versions differing only in a detail of the surface stress parameterization indicate that the use of radar wind observations in NWP model validation is very beneficial.
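In its simplest form, the radial wind observation operator projects the model wind, interpolated to the observation location, onto the radar beam direction as written below (α is the azimuth and ε the elevation angle of the beam); the thesis additionally accounts for beam broadening and the bending of the pulse path, which this simplified expression neglects.

```latex
% Simplified radial wind observation operator (no beam broadening or bending).
v_r = u \,\sin\alpha \,\cos\varepsilon
    + v \,\cos\alpha \,\cos\varepsilon
    + w \,\sin\varepsilon
```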
Abstract:
In modern evolutionary divergence analysis the role of geological information extends beyond providing a timescale, to informing molecular rate variation across the tree. Here I consider the implications of this development. I use fossil calibrations to test the accuracy of models of molecular rate evolution for placental mammals, and reveal substantial misspecification associated with life history rate correlates. Adding further calibrations to reduce dating errors at specific nodes unfortunately tends to transfer underlying rate errors to adjacent branches. Thus, tight calibration across the tree is vital to buffer against rate model errors. I argue that this must include allowing maximum bounds to be tight when good fossil records permit, otherwise divergences deep in the tree will tend to be inflated by the interaction of rate errors and asymmetric confidence in minimum and maximum bounds. In the case of placental mammals I sought to reduce the potential for transferring calibration and rate model errors across the tree by focusing on well-supported calibrations with appropriately conservative maximum bounds. The resulting divergence estimates are younger than others published recently, and provide the long-anticipated molecular signature for the placental mammal radiation observed in the fossil record near the 66 Ma Cretaceous–Paleogene extinction event.
Abstract:
The aim of this study was to identify and describe the clinical reasoning characteristics of diagnostic experts. A group of 21 experienced general practitioners were asked to complete the Diagnostic Thinking Inventory (DTI) and a set of 10 clinical reasoning problems (CRPs) to evaluate their clinical reasoning. Both the DTI and the CRPs were scored, and the CRP response patterns of each GP examined in terms of the number and type of errors contained in them. Analysis of these data showed that six GPs were able to reach the correct diagnosis using significantly less clinical information than their colleagues. These GPs also made significantly fewer interpretation errors but scored lower on both the DTI and the CRPs. Additionally, this analysis showed that more than 20% of misdiagnoses occurred despite no errors being made in the identification and interpretation of relevant clinical information. These results indicate that these six GPs diagnose efficiently, effectively and accurately using relatively few clinical data and can therefore be classified as diagnostic experts. They also indicate that a major cause of misdiagnoses is failure to properly integrate clinical data. We suggest that increased emphasis on this step in the reasoning process should prove beneficial to the development of clinical reasoning skill in undergraduate medical students.
Abstract:
DNA amplification using the Polymerase Chain Reaction (PCR) in a small volume is used in lab-on-a-chip systems involving DNA manipulation. For a few microliters of liquid, it becomes difficult to measure and monitor the thermal profile accurately and reproducibly, which is an essential requirement for successful amplification. Conventional temperature sensors are either not biocompatible or too large, and hence positioned away from the liquid, leading to calibration errors. In this work we present a fluorescence-based detection technique that is completely biocompatible and measures the liquid temperature directly. PCR is demonstrated in a 3 µL silicon-glass microfabricated device using non-contact induction heating whose temperature is controlled using fluorescence feedback from SYBR Green I dye molecules intercalated within sensor DNA. The performance is compared with temperature feedback using a thermocouple sensor. A melting curve followed by gel electrophoresis is used to confirm product specificity after the PCR cycles. (c) 2007 Elsevier B.V. All rights reserved.
Abstract:
Radiation therapy (RT) currently plays a significant role in the curative treatment of several cancers. External beam RT is carried out mostly using megavoltage beams of linear accelerators. Tumor eradication and normal tissue complications correlate with the dose absorbed in tissues. Normally this dependence is steep, and it is crucial that the actual dose within the patient corresponds accurately to the planned dose. All factors in an RT procedure contain uncertainties, requiring strict quality assurance. From the hospital physicist's point of view, technical quality control (QC), dose calculations, and methods for verifying the correct treatment location are the most important subjects. The most important factor in technical QC is verifying that the radiation production of an accelerator, called the output, is within narrow acceptable limits. The output measurements are carried out according to a locally chosen dosimetric QC program defining the measurement time interval and action levels. Dose calculation algorithms need to be configured for the accelerators using measured beam data. The uncertainty of such data sets the limits for the best achievable calculation accuracy. All these dosimetric measurements require good experience, are laborious, take up resources needed for treatments, and are prone to several random and systematic sources of error. Appropriate verification of the treatment location is more important in intensity modulated radiation therapy (IMRT) than in conventional RT, because of the steep dose gradients produced within or close to healthy tissues located only a few millimetres from the target volume. The thesis concentrated on investigating the quality of dosimetric measurements, the efficacy of dosimetric QC programs, the verification of measured beam data, and the effect of positional errors on the dose received by the major salivary glands in head and neck IMRT. A method was developed for estimating the effect of the use of different dosimetric QC programs on the overall uncertainty of dose, and data were provided to facilitate the choice of a sufficient QC program. The method takes into account the local output stability and the reproducibility of the dosimetric QC measurements. A method based on model fitting of the QC measurement results was proposed for estimating both of these factors. The reduction of random measurement errors and the optimization of the QC procedure were also investigated, and a method and suggestions were presented for these purposes. The accuracy of beam data was evaluated in Finnish RT centres, and a sufficient accuracy level was estimated for the beam data. A method based on the use of reference beam data was developed for the QC of beam data. Dosimetric and geometric accuracy requirements were evaluated for head and neck IMRT when the function of the major salivary glands is intended to be spared; these criteria are based on the dose response obtained for the glands. Random measurement errors could be reduced, enabling action levels to be lowered and the measurement time interval to be prolonged from 1 month to as much as 6 months while maintaining dose accuracy. The combined effect of the proposed methods, suggestions, and criteria was found to help avoid maximal dose errors of up to about 8%. In addition, their use may make the strictest recommended overall dose accuracy level of 3% (1 SD) achievable.
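As an illustration of the idea of estimating output stability and measurement reproducibility by model fitting of QC results, the sketch below fits a simple linear drift to a series of output constancy measurements and takes the residual scatter as the reproducibility. This is a minimal example of the approach under that assumption, not the specific model proposed in the thesis, and the data are invented.

```python
# Sketch: estimate accelerator output drift (stability) and residual scatter
# (measurement reproducibility) from dosimetric QC results by model fitting.
# Illustrative only; the thesis's actual fitting model may differ.
import numpy as np

def fit_output_qc(days, output):
    """Fit output = a + b*t; return drift per 30 days and residual SD."""
    days = np.asarray(days, dtype=float)
    output = np.asarray(output, dtype=float)
    b, a = np.polyfit(days, output, 1)           # linear drift model
    residuals = output - (a + b * days)
    drift_per_month = b * 30.0                    # output change per 30 days
    reproducibility = residuals.std(ddof=2)       # scatter about the fitted trend
    return drift_per_month, reproducibility

# Example: monthly output constancy checks, in % of the reference output.
days = [0, 30, 60, 90, 120, 150]
output = [100.0, 100.3, 100.1, 100.6, 100.4, 100.8]
print(fit_output_qc(days, output))
```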
Abstract:
The Polar Regions are an energy sink of the Earth system, as the Sun's rays do not reach the Poles for half of the year and hit them only at very low angles for the other half. In summer, solar radiation is the dominant energy source for the Polar areas; therefore, even small changes in the surface albedo strongly affect the surface energy balance and, thus, the speed and amount of snow and ice melting. In winter, the main heat sources for the atmosphere are the cyclones approaching from lower latitudes, and the atmosphere-surface heat transfer takes place through turbulent mixing and longwave radiation, the latter dominated by clouds. The aim of this thesis is to improve knowledge of the surface and atmospheric processes that control the surface energy budget over snow and ice, with particular focus on albedo during the spring and summer seasons, and on horizontal advection of heat, cloud longwave forcing, and turbulent mixing during the winter season. The critical importance of a correct albedo representation in models is illustrated through an analysis of the causes of the errors in the surface and near-surface air temperatures produced in a short-range numerical weather forecast by the HIRLAM model. The daily and seasonal variability of snow and ice albedo has then been examined by analysing field measurements of albedo carried out in different environments. On the basis of the data analysis, simple albedo parameterizations have been derived, which can be implemented in thermodynamic sea ice models as well as in numerical weather prediction and climate models. Field measurements of radiation and turbulent fluxes over the Bay of Bothnia (Baltic Sea) also allowed the impact of a large albedo change during the melting season on the surface energy and ice mass budgets to be examined. When high contrasts in surface albedo are present, as in the case of snow-covered areas next to open water, the effect of surface albedo heterogeneity on the downwelling solar irradiance under overcast conditions is very significant, although it is usually not accounted for in single-column radiative transfer calculations. To account for this effect, an effective albedo parameterization based on three-dimensional Monte Carlo radiative transfer calculations has been developed. To test a potentially relevant application of the effective albedo parameterization, its performance in the ground-based retrieval of cloud optical depth was illustrated. Finally, the factors causing the large variations of the surface and near-surface temperatures over the Central Arctic during winter were examined. The relative importance of cloud radiative forcing, turbulent mixing, and lateral heat advection for the Arctic surface temperature was quantified through the analysis of direct observations from Russian drifting ice stations, with the lateral heat advection calculated from reanalysis products.
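The central role of albedo in the summer surface energy budget can be seen from the standard surface energy balance over snow or ice, written below in a generic form (sign conventions and the treatment of the conductive and melt terms vary between studies): the absorbed shortwave term (1-α)S↓ makes the energy available for melt directly sensitive to small albedo changes.

```latex
% Generic surface energy balance over snow/ice (fluxes toward the surface positive).
% alpha: albedo, S: downwelling shortwave, L: longwave, Q_H/Q_E: turbulent sensible
% and latent heat fluxes, Q_C: conductive flux, Q_M: energy available for melt.
(1-\alpha)\,S\!\downarrow \;+\; L\!\downarrow \;-\; L\!\uparrow
\;+\; Q_H \;+\; Q_E \;+\; Q_C \;=\; Q_M
```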