925 results for "Weighted histogram analysis method"
Abstract:
Proton therapy is growing increasingly popular due to its superior dose characteristics compared to conventional photon therapy. Protons travel a finite range in the patient body and stop, thereby delivering no dose beyond their range. However, because the range of a proton beam is heavily dependent on the tissue density along its beam path, uncertainties in patient setup position and inherent range calculation can degrade the dose distribution significantly. Despite these challenges unique to proton therapy, uncertainties are currently managed during proton treatment planning much as they are in conventional photon therapy. The goal of this dissertation research was to develop a treatment planning method and a plan evaluation method that address proton-specific issues regarding setup and range uncertainties. Treatment plan design method adapted to proton therapy: Currently, for proton therapy using a scanning beam delivery system, setup uncertainties are largely accounted for by geometrically expanding a clinical target volume (CTV) to a planning target volume (PTV). However, a PTV alone cannot adequately account for range uncertainties coupled to misaligned patient anatomy in the beam path, since it does not account for the change in tissue density. To remedy this problem, we proposed a beam-specific PTV (bsPTV) that accounts for the change in tissue density along the beam path due to these uncertainties. Our proposed method was successfully implemented, and its superiority over the conventional PTV was shown through a controlled experiment. Furthermore, we have shown that the bsPTV concept can be incorporated into beam angle optimization for better target coverage and normal tissue sparing for a selected lung cancer patient. Treatment plan evaluation method adapted to proton therapy: The dose-volume histogram of the clinical target volume (CTV) or any other volume of interest at the time of planning does not represent the most probable dosimetric outcome of a given plan, as it does not include the uncertainties mentioned earlier. Currently, the PTV is used as a surrogate of the CTV’s worst-case scenario for target dose estimation. However, because proton dose distributions are subject to change under these uncertainties, the validity of the PTV analysis method is questionable. To remedy this problem, we proposed the use of statistical parameters to quantify uncertainties on both the dose-volume histogram and the dose distribution directly. The robust plan analysis tool was successfully implemented to compute both the expectation value and the standard deviation of dosimetric parameters of a treatment plan under the uncertainties. For 15 lung cancer patients, the proposed method was used to quantify the dosimetric difference between the nominal situation and its expected value under the uncertainties.
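As a rough illustration of the robust plan analysis idea, the sketch below estimates the expectation value and standard deviation of a dosimetric parameter (here the CTV mean dose) from doses recomputed under sampled setup/range scenarios. The scenario doses, uniform weights and choice of metric are hypothetical stand-ins, not the dissertation's implementation.

```python
import numpy as np

def dvh_metric_statistics(scenario_doses, weights=None, metric=np.mean):
    """Expectation and standard deviation of a dosimetric parameter
    over uncertainty scenarios.

    scenario_doses : (n_scenarios, n_voxels) array of CTV voxel doses,
                     one row per sampled setup/range scenario.
    weights        : scenario probabilities (uniform if None).
    metric         : scalar summary of a dose array, e.g. mean dose.
    """
    values = np.array([metric(d) for d in scenario_doses])
    if weights is None:
        weights = np.full(len(values), 1.0 / len(values))
    expectation = np.sum(weights * values)
    variance = np.sum(weights * (values - expectation) ** 2)
    return expectation, np.sqrt(variance)

# Hypothetical example: 9 setup/range scenarios for a 1000-voxel CTV.
rng = np.random.default_rng(0)
nominal = np.full(1000, 60.0)                        # 60 Gy prescription
scenarios = nominal + rng.normal(0, 1.5, (9, 1000))  # perturbed doses
mean_dose, sd = dvh_metric_statistics(scenarios)
print(f"E[mean CTV dose] = {mean_dose:.2f} Gy, SD = {sd:.2f} Gy")
```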
Abstract:
Background: Several meta-analysis methods can be used to quantitatively combine the results of a group of experiments, including the weighted mean difference (WMD), statistical vote counting (SVC), the parametric response ratio (RR) and the non-parametric response ratio (NPRR). The software engineering community has focused on the weighted mean difference method. However, other meta-analysis methods have distinct strengths, such as being usable when variances are not reported. There are as yet no guidelines to indicate which method is best for use in each case. Aim: Compile a set of rules that SE researchers can use to ascertain which aggregation method is best for use in the synthesis phase of a systematic review. Method: Monte Carlo simulation varying the number of experiments in the meta-analyses, the number of subjects that they include, their variance and effect size. We empirically calculated the reliability and statistical power in each case. Results: WMD is generally reliable if the variance is low, whereas its power depends on the effect size and the number of subjects per meta-analysis; the reliability of RR is generally unaffected by changes in variance, but it does require more subjects than WMD to be powerful; NPRR is the most reliable method, but it is not very powerful; SVC behaves well when the effect size is moderate, but is less reliable with other effect sizes. Detailed tables of results are annexed. Conclusions: Before undertaking statistical aggregation in software engineering, it is worthwhile checking whether there is any appreciable difference in the reliability and power of the methods. If there is, software engineers should select the method that optimizes both parameters.
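For readers unfamiliar with the aggregation methods compared above, the sketch below implements the classical inverse-variance weighted mean difference (WMD) for a handful of made-up experiments; it is a generic fixed-effect formulation, not the simulation code of the study.

```python
import numpy as np

def weighted_mean_difference(m_t, sd_t, n_t, m_c, sd_c, n_c):
    """Fixed-effect weighted mean difference (inverse-variance weights).

    Each argument is a sequence with one entry per experiment:
    treatment/control means, standard deviations and sample sizes.
    Returns the pooled difference and its standard error.
    """
    d = np.asarray(m_t) - np.asarray(m_c)                  # per-study difference
    var = np.asarray(sd_t) ** 2 / n_t + np.asarray(sd_c) ** 2 / n_c
    w = 1.0 / var                                          # inverse-variance weights
    pooled = np.sum(w * d) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return pooled, se

# Hypothetical meta-analysis of three experiments.
pooled, se = weighted_mean_difference(
    m_t=[12.1, 11.4, 13.0], sd_t=[2.0, 2.5, 1.8], n_t=[20, 15, 30],
    m_c=[10.2, 10.9, 11.1], sd_c=[2.2, 2.4, 2.1], n_c=[20, 15, 30])
print(f"WMD = {pooled:.2f} +/- {1.96 * se:.2f} (95% CI half-width)")
```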
Abstract:
Ancient starch analysis is a microbotanical method in which starch granules are extracted from archaeological residues and the botanical source is identified. The method is an important addition to established palaeoethnobotanical research, as it can reveal ancient microremains of starchy staples such as cereal grains and seeds. In addition, starch analysis can detect starch originating from underground storage organs, which are rarely discovered using other methods. Because starch is tolerant of acidic soils, unlike most organic matter, starch analysis can be successful in northern boreal regions. Starch analysis has potential in the study of cultivation, plant domestication, wild plant usage and tool function, as well as in locating activity areas at sites and discovering human impact on the environment. The aim of this study was to experiment with the starch analysis method in Finnish and Estonian archaeology by building a starch reference collection from cultivated and native plant species, by developing sampling, measuring and analysis protocols, by extracting starch residues from archaeological artefacts and soils, and by identifying their origin. The purpose of this experiment was to evaluate the suitability of the method for the study of subsistence strategies in prehistoric Finland and Estonia. A total of 64 archaeological samples were analysed from four Late Neolithic sites in Finland and Estonia, with radiocarbon dates ranging between 2904 calBC and 1770 calBC. The samples yielded starch granules, which were compared with the starch reference collection and descriptions in the literature. Cereal-type starch was identified from the Finnish Kiukainen culture site and from the Estonian Corded Ware site. The samples from the Finnish Corded Ware site yielded underground storage organ starch, which may be the first evidence of the use of rhizomes as food in Finland. No cereal-type starch was observed. Although the sample sets were limited, the experiment confirmed that starch granules have been preserved well in the archaeological material of Finland and Estonia, and that differences between subsistence patterns, as well as evidence of cultivation and wild plant gathering, can be discovered using starch analysis. By collecting large sample sets and addressing the three most important issues – preventing contamination, collecting adequate references and understanding taphonomic processes – starch analysis can substantially contribute to research on ancient subsistence in Finland and Estonia.
Abstract:
The purpose of the present study was to assess the association between overbite and craniofacial growth pattern. The sample comprised eighty-six cephalograms obtained during the orthodontic pretreatment phase and analyzed using the Radiocef program to identify the craniofacial landmarks and perform orthodontic measurements. The variables utilized were overbite, the Jarabak percentage and the Vert index, as well as the classifications resulting from the interpretation of these measurements. A significance level of 5% was adopted in all statistical tests. Measurement reliability was checked by calculating method error. Weighted Kappa analysis showed that agreement between the facial types defined by the Vert index and the direction of growth trend established by the Jarabak percentage was not satisfactory. Owing to this lack of equivalency, a potential association between overbite and craniofacial growth pattern was evaluated using the chi-square test, considering the two methods separately. No relationship of dependence between overbite and craniofacial growth pattern was revealed by the results obtained. Therefore, it can be concluded that the classification of facial growth pattern will not be the same when considering the Jarabak and the Ricketts analyses, and that increased overbite cannot be associated with a brachyfacial growth pattern, nor can open bite be associated with a dolichofacial growth pattern.
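A minimal sketch of the weighted Kappa agreement check described above, using scikit-learn's implementation; the category codings for the Vert index and the Jarabak growth trend are illustrative stand-ins.

```python
from sklearn.metrics import cohen_kappa_score

# 0 = brachyfacial/horizontal, 1 = mesofacial/neutral, 2 = dolichofacial/vertical
vert_type = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2]          # Vert-index classification
jarabak_trend = [0, 1, 1, 1, 2, 2, 0, 1, 0, 2]      # Jarabak growth direction

kappa = cohen_kappa_score(vert_type, jarabak_trend, weights="linear")
print(f"linear weighted kappa = {kappa:.2f}")  # low values mean poor agreement
```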
Abstract:
The traditional methods employed to detect atherosclerotic lesions allow for the identification of lesions; however, they do not provide specific characterization of the lesion's biochemistry. Currently, Raman spectroscopy techniques are widely used as a characterization method for unknown substances, which makes this technique very important for detecting atherosclerotic lesions. Spectral interpretation is based on the analysis of frequency peaks present in the signal; however, spectra obtained from the same substance can show slightly different peaks, and these differences make it difficult to create an automatic method for spectral signal analysis. This paper presents a signal analysis method based on a clustering technique that allows for the classification of spectra as well as the inference of a diagnosis about the arterial wall condition. The objective is to develop a computational tool that is able to create clusters of spectra according to the arterial wall state and, after data collection, to classify a specific spectrum into its correct cluster.
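The paper's specific clustering technique is not detailed in the abstract; the sketch below illustrates the general idea with k-means applied to synthetic Raman-like spectra whose dominant peak centers jitter between acquisitions, as described above.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
wavenumbers = np.linspace(800, 1800, 256)   # cm^-1 axis (illustrative)

def synthetic_spectrum(peak_center):
    """Gaussian peak with slight center jitter, mimicking the peak shifts
    between spectra of the same substance mentioned above."""
    jitter = rng.normal(0, 3.0)
    peak = np.exp(-((wavenumbers - peak_center - jitter) ** 2) / (2 * 15 ** 2))
    return peak + rng.normal(0, 0.02, wavenumbers.size)

# Two hypothetical arterial-wall conditions with different dominant peaks.
spectra = np.array([synthetic_spectrum(1004) for _ in range(20)] +
                   [synthetic_spectrum(1450) for _ in range(20)])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(spectra)
print(labels)   # spectra grouped despite the per-acquisition peak jitter
```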
Abstract:
An increasing number of studies show that glycogen-accumulating organisms (GAOs) can survive, and may indeed proliferate, under the alternating anaerobic/aerobic conditions found in EBPR systems, making them strong competitors of the polyphosphate-accumulating organisms (PAOs). Understanding their behaviors in a mixed PAO and GAO culture under various operational conditions is essential for developing operating strategies that disadvantage the growth of this group of unwanted organisms. A model-based data analysis method is developed in this paper for the study of the anaerobic PAO and GAO activities in a mixed PAO and GAO culture. The method primarily makes use of the hydrogen ion production rate and the carbon dioxide transfer rate resulting from the acetate uptake processes by PAOs and GAOs, measured with a recently developed titration and off-gas analysis (TOGA) sensor. The method is demonstrated using data from a laboratory-scale sequencing batch reactor (SBR) operated under alternating anaerobic and aerobic conditions. The data analysis using the proposed method strongly indicates a coexistence of PAOs and GAOs in the system, which was independently confirmed by fluorescent in situ hybridization (FISH) measurement. The model-based analysis also allowed the identification of the respective acetate uptake rates by PAOs and GAOs, along with a number of kinetic and stoichiometric parameters involved in the PAO and GAO models. The excellent fit between the model predictions and the experimental data not involved in parameter identification shows that the parameter values found are reliable and accurate. It also demonstrates that the current anaerobic PAO and GAO models are able to accurately characterize the PAO/GAO mixed culture obtained in this study. This is of major importance as no pure culture of either PAOs or GAOs has been reported to date, and hence the current PAO and GAO models were developed for the interpretation of experimental results of mixed cultures. The proposed method is readily applicable for detailed investigations of the competition between PAOs and GAOs in enriched cultures. However, the fermentation of organic substrates carried out by ordinary heterotrophs needs to be accounted for when the method is applied to the study of PAO and GAO competition in full-scale sludges.
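As a schematic of the model-based identification step, the sketch below recovers two component acetate uptake rates from two TOGA-style measured signals by non-negative least squares; the stoichiometric coefficients and rates are invented for illustration and are not the paper's calibrated model.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical stoichiometric coefficients linking the two measured signals
# (H+ production rate, CO2 transfer rate) to the acetate uptake rates of
# PAOs and GAOs. Real values would come from the anaerobic PAO/GAO models.
A = np.array([[0.25, 0.50],    # H+ produced per unit acetate taken up (PAO, GAO)
              [0.10, 0.35]])   # CO2 transferred per unit acetate taken up

true_rates = np.array([3.0, 1.5])      # mmol/(L*h), PAO and GAO (invented)
rng = np.random.default_rng(2)
measured = A @ true_rates + rng.normal(0, 0.01, 2)   # noisy sensor signals

rates, residual = nnls(A, measured)    # non-negative least squares
print(f"estimated PAO uptake = {rates[0]:.2f}, GAO uptake = {rates[1]:.2f}")
```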
Abstract:
In this article we provide homotopy solutions of a nonlinear cancer model describing the dynamics of tumor cells in interaction with healthy and effector immune cells. We apply a semi-analytic technique for solving strongly nonlinear systems – the Step Homotopy Analysis Method (SHAM). This algorithm, based on a modification of the standard homotopy analysis method (HAM), allows one to obtain a one-parameter family of explicit series solutions. Using the homotopy solutions, we first investigate the dynamical effect of the activation of the effector immune cells in the deterministic dynamics, showing that increased activation drives the system into chaotic dynamics via a period-doubling bifurcation scenario. Then, by adding demographic stochasticity to the homotopy solutions, we show, in contrast to the deterministic dynamics, that increased activation of the immune cells facilitates cancer clearance, involving tumor cell extinction and healthy cell persistence. Our results highlight the importance of therapies activating the effector immune cells at early stages of cancer progression.
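The SHAM series construction itself is involved; as a numerical stand-in, the sketch below integrates a generic three-population tumor/healthy/effector model of the de Pillis-Radunskaya type, with the parameter a playing the role of the effector-cell activation discussed above. Equations and parameter values are illustrative, not those of the article.

```python
from scipy.integrate import solve_ivp

def cancer_model(t, y, a=0.2):
    """Generic tumor (T), healthy (H), effector immune (E) competition model
    (de Pillis-Radunskaya form); 'a' is the effector-cell activation rate.
    All coefficients are illustrative."""
    T, H, E = y
    dT = T * (1 - T) - 0.5 * T * H - 1.0 * T * E       # logistic tumor growth
    dH = 0.6 * H * (1 - H) - 1.5 * T * H               # healthy cells
    dE = a * T * E / (0.3 + T) - 0.5 * T * E - 0.2 * E  # immune recruitment/decay
    return [dT, dH, dE]

sol = solve_ivp(cancer_model, (0, 200), [0.1, 0.9, 0.1],
                args=(0.35,), rtol=1e-8)
print(f"tumor burden at t=200: {sol.y[0, -1]:.3f}")
```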
Abstract:
PURPOSE: To determine the correlation between ocular blood flow velocities and ocular pulse amplitude (OPA) in glaucoma patients using colour Doppler imaging (CDI) waveform analysis. METHOD: A prospective, observer-masked, case-control study was performed. OPA and blood flow variables from the central retinal artery and vein (CRA, CRV), nasal and temporal short posterior ciliary arteries (NPCA, TPCA) and ophthalmic artery (OA) were obtained through dynamic contour tonometry and CDI, respectively. Univariate and multiple regression analyses were performed to explore the correlations between OPA and retrobulbar CDI waveform and systemic cardiovascular parameters (blood pressure, blood pressure amplitude, mean ocular perfusion pressure and peripheral pulse). RESULTS: One hundred and ninety-two patients were included [healthy controls: 55; primary open-angle glaucoma (POAG): 74; normal-tension glaucoma (NTG): 63]. OPA was statistically different between groups (healthy: 3.17 ± 1.2 mmHg; NTG: 2.58 ± 1.2 mmHg; POAG: 2.60 ± 1.1 mmHg; p < 0.01), but not between the glaucoma groups (p = 0.60). Multiple regression models to explain OPA variance were built for each cohort (healthy: p < 0.001, r = 0.605; NTG: p = 0.003, r = 0.372; POAG: p < 0.001, r = 0.412). OPA was independently associated with retrobulbar CDI parameters in the healthy subjects and POAG patients (healthy CRV resistance index: β = 3.37, CI: 0.16-6.59; healthy NPCA mean systolic/diastolic velocity ratio: β = 1.34, CI: 0.52-2.15; POAG TPCA mean systolic velocity: β = 0.14, CI: 0.05-0.23). OPA in the NTG group was associated with diastolic blood pressure and pulse rate (β = -0.04, CI: -0.06 to -0.01; β = -0.04, CI: -0.06 to -0.001, respectively). CONCLUSIONS: Vascular-related models provide a better explanation of OPA variance in healthy individuals than in glaucoma patients. The variables that influence OPA seem to be different in healthy, POAG and NTG patients.
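A schematic of the kind of multiple regression reported above, fitted with statsmodels on synthetic stand-in data; the variable names (CRV resistance index, TPCA mean systolic velocity, diastolic blood pressure) follow the abstract, but every number below is invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 74                                        # POAG cohort size, for flavour
df = pd.DataFrame({
    "crv_ri": rng.normal(0.55, 0.05, n),      # CRV resistance index
    "tpca_msv": rng.normal(8.0, 1.2, n),      # TPCA mean systolic velocity
    "dbp": rng.normal(78, 9, n),              # diastolic blood pressure
})
# Synthetic OPA with an arbitrary dependence on the predictors plus noise.
df["opa"] = (1.0 + 3.0 * df["crv_ri"] + 0.12 * df["tpca_msv"]
             - 0.02 * df["dbp"] + rng.normal(0, 0.4, n))

X = sm.add_constant(df[["crv_ri", "tpca_msv", "dbp"]])
model = sm.OLS(df["opa"], X).fit()
print(model.params)      # beta coefficients, analogous to those reported
print(model.conf_int())  # 95% confidence intervals
```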
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
This study investigates the groundwater quality for irrigation purposes, the vulnerability of the aquifer system to pollution, and the aquifer potential for sustainable water resources development in the Kobo Valley development project. The groundwater quality is evaluated by predicting the best possible distribution of hydrogeochemical parameters using a geostatistical method and comparing them with the water quality guidelines given for irrigation. The hydrogeochemical parameters considered are SAR, EC, TDS, Cl-, Na+, Ca2+, SO42- and HCO3-. The spatial variability maps reveal that these parameters fall into safe, moderate, and severe or increasing problem classes. To present this clearly, an aggregated Water Quality Index (WQI) map is constructed using the Weighted Arithmetic Mean method. It is found that the Kobo-Gerbi sub-basin suffers from poor water quality for irrigation, Waja Golesha has moderate quality, and Hormat Golena is the best sub-basin in terms of water quality. The groundwater vulnerability assessment of the study area is made using the GOD rating system. It is found that the whole area is exposed to moderate to high risk of vulnerability, which is a clear warning for proper management of the resource. The highest risks of vulnerability are noticed in the Hormat Golena and Waja Golesha sub-basins. The aquifer potential of the study area is obtained using weighted overlay analysis, and 73.3% of the total area is a good site for future water well development. The remaining 26.7% of the area is not considered a good site for siting groundwater wells; most of this area falls under the Kobo-Gerbi sub-basin.
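A minimal sketch of the Weighted Arithmetic Mean WQI aggregation named above, assuming the common sub-index form q_i = 100 * C_i / S_i; the guideline values and weights are illustrative, not the irrigation standards used in the study.

```python
import numpy as np

def weighted_arithmetic_wqi(concentrations, standards, weights):
    """Weighted Arithmetic Mean Water Quality Index:
        q_i = 100 * C_i / S_i   (sub-index relative to guideline S_i)
        WQI = sum(w_i * q_i) / sum(w_i)
    """
    q = 100.0 * np.asarray(concentrations) / np.asarray(standards)
    w = np.asarray(weights, dtype=float)
    return np.sum(w * q) / np.sum(w)

# Hypothetical sample: EC (uS/cm), TDS (mg/L), Cl- (mg/L), Na+ (mg/L),
# with made-up irrigation guideline values and weights.
wqi = weighted_arithmetic_wqi(
    concentrations=[1800, 1100, 250, 180],
    standards=[2250, 1500, 350, 200],
    weights=[0.3, 0.3, 0.2, 0.2])
print(f"WQI = {wqi:.1f}")   # higher values indicate poorer suitability
```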
Abstract:
The building sector is one of Europe's main energy consumers, making buildings an important target for wiser energy use, improving indoor comfort conditions and reducing energy consumption. To achieve the European Union targets for energy consumption and carbon reductions, it is crucial to act not only on new but also on existing buildings, which constitute the majority of the building stock. In existing buildings, significant improvement of their efficiency requires substantial investment. Therefore, costs are a major concern in the decision-making process, and the analysis of the cost-effectiveness of the interventions is an important guide for the selection of different renovation scenarios. The Portuguese thermal legislation considers the simple payback method for calculating the time for the return of the investment. However, this method does not take into consideration inflation, cash flows and the cost of capital, nor the future costs of energy and the lifetime of the building elements, as a life cycle cost analysis does. In order to understand the impact of the economic analysis method used on the choice of renovation measures, a case study was analysed using both simple payback calculations and life cycle cost analysis. Overall results show that less far-reaching renovation measures are indicated when using the simple payback calculations, which may lead to solutions that are less cost-effective in a long-run perspective.
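The contrast drawn above between simple payback and life cycle costing can be made concrete with a short sketch: the first function ignores discounting and price escalation, while the second computes a discounted net present value over the measure's lifetime. The renovation figures are invented.

```python
def simple_payback(investment, annual_saving):
    """Years to recover the investment, ignoring discounting and inflation."""
    return investment / annual_saving

def npv_of_savings(investment, annual_saving, years, discount_rate,
                   energy_escalation=0.0):
    """Net present value of a renovation measure over its lifetime,
    with optional real escalation of energy prices."""
    npv = -investment
    for t in range(1, years + 1):
        saving_t = annual_saving * (1 + energy_escalation) ** t
        npv += saving_t / (1 + discount_rate) ** t
    return npv

# Hypothetical wall-insulation measure.
inv, save = 12000.0, 900.0
print(f"simple payback: {simple_payback(inv, save):.1f} years")
print(f"30-year NPV at 4%: {npv_of_savings(inv, save, 30, 0.04, 0.02):.0f} EUR")
```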
Abstract:
The impending introduction of lead-free solder in the manufacture of electrical and electronic products has presented the electronics industry with many challenges. European manufacturers must transfer from a tin-lead process to a lead-free process by July 2006 as a result of the publication of two directives from the European Parliament. Tin-lead solders have been used for mechanical and electrical connections on printed circuit boards for over fifty years, and considerable process knowledge has been accumulated. Extensive literature reviews were conducted on the topic, and as a result it was found that there are many implications to be considered with the introduction of lead-free solder. One particular question that requires answering is: can lead-free solder be used in existing manufacturing processes? The purpose of this research is to conduct a comparative study of a tin-lead solder and a lead-free solder in two key surface mount technology (SMT) processes. The two SMT processes in question were the stencil printing process and the reflow soldering process. Unreplicated fractional factorial experimental designs were used to carry out the studies. The quality of paste deposition in terms of height and volume were the characteristics of interest in the stencil printing process. The quality of solder joints produced in the reflow soldering experiment was assessed using x-ray and cross-sectional analysis. This provided qualitative data that was then uniquely scored and weighted using a method developed during the research. Nested experimental design techniques were then used to analyse the resulting quantitative data. Predictive models were developed that allowed for the optimisation of both processes. Results from both experiments show that solder joints of comparable quality to those produced using tin-lead solder can be produced using lead-free solder in current SMT processes.
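As an illustration of how main effects are estimated in an unreplicated fractional factorial like those used here, the sketch below analyses a made-up 2^(3-1) stencil-printing design; the factors, levels and responses are hypothetical, not the thesis data.

```python
import numpy as np

# Unreplicated 2^(3-1) fractional factorial (defining relation I = ABC).
# Columns: squeegee speed (A), squeegee pressure (B), separation speed (C),
# coded -1/+1; response: paste deposit height (um). All values invented.
design = np.array([[-1, -1, +1],
                   [+1, -1, -1],
                   [-1, +1, -1],
                   [+1, +1, +1]])
height = np.array([148.0, 153.0, 151.0, 160.0])

# Main effect of each factor = mean(response at high) - mean(response at low).
for name, col in zip("ABC", design.T):
    effect = height[col == 1].mean() - height[col == -1].mean()
    print(f"main effect {name}: {effect:+.1f} um")
```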
Abstract:
BACKGROUND: Radiation dose exposure is of particular concern in children due to the possible harmful effects of ionizing radiation. The adaptive statistical iterative reconstruction (ASIR) method is a promising new technique that reduces image noise and produces better overall image quality compared with routine-dose contrast-enhanced methods. OBJECTIVE: To assess the benefits of ASIR on the diagnostic image quality in paediatric cardiac CT examinations. MATERIALS AND METHODS: Four paediatric radiologists based at two major hospitals evaluated ten low-dose paediatric cardiac examinations (80 kVp, CTDI(vol) 4.8-7.9 mGy, DLP 37.1-178.9 mGy·cm). The average age of the cohort studied was 2.6 years (range 1 day to 7 years). Acquisitions were performed on a 64-MDCT scanner. All images were reconstructed at various ASIR percentages (0-100%). For each examination, radiologists scored 19 anatomical structures using the relative visual grading analysis method. To estimate the potential for dose reduction, acquisitions were also performed on a Catphan phantom and a paediatric phantom. RESULTS: The best image quality for all clinical images was obtained with 20% and 40% ASIR (p < 0.001) whereas with ASIR above 50%, image quality significantly decreased (p < 0.001). With 100% ASIR, a strong noise-free appearance of the structures reduced image conspicuity. A potential for dose reduction of about 36% is predicted for a 2- to 3-year-old child when using 40% ASIR rather than the standard filtered back-projection method. CONCLUSION: Reconstruction including 20% to 40% ASIR slightly improved the conspicuity of various paediatric cardiac structures in newborns and children with respect to conventional reconstruction (filtered back-projection) alone.
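ASIR is often described, in simplified terms, as a percentage blend of an iterative reconstruction with the filtered back-projection image; the sketch below uses that linear-blend view on synthetic data to show how noise falls as the ASIR percentage rises. This is a common simplification for illustration, not GE's proprietary algorithm.

```python
import numpy as np

def asir_blend(fbp_image, ir_image, percent):
    """Blend a filtered back-projection image with a fully iterative
    reconstruction at the given ASIR percentage (0 = pure FBP,
    100 = pure iterative)."""
    alpha = percent / 100.0
    return (1.0 - alpha) * fbp_image + alpha * ir_image

# Synthetic illustration: the iterative image has much less noise than FBP.
rng = np.random.default_rng(4)
truth = np.full((64, 64), 100.0)              # uniform phantom, HU-like units
fbp = truth + rng.normal(0, 20, truth.shape)  # noisy FBP reconstruction
ir = truth + rng.normal(0, 4, truth.shape)    # low-noise iterative image

for pct in (0, 20, 40, 100):
    print(f"ASIR {pct:3d}%: noise SD = {asir_blend(fbp, ir, pct).std():.1f}")
```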
Abstract:
Achieving a high degree of dependability in complex macro-systems is challenging. Because of the large number of components and the numerous independent teams involved, an overview of the global system performance is usually lacking to support both design and operation adequately. A functional failure mode, effects and criticality analysis (FMECA) approach is proposed to address the dependability optimisation of large and complex systems. The basic inductive FMECA model has been enriched to include considerations such as operational procedures, alarm systems, environmental and human factors, as well as operation in degraded mode. Its implementation on a commercial software tool allows active linking between the functional layers of the system and facilitates data processing and retrieval, enabling it to contribute actively to system optimisation. The proposed methodology has been applied to optimise dependability in a railway signalling system. Signalling systems are a typical example of large complex systems made of multiple hierarchical layers. The proposed approach appears appropriate to assess the global risk and availability level of the system as well as to identify its vulnerabilities. This enriched-FMECA approach makes it possible to overcome some of the limitations and pitfalls previously reported with classical FMECA approaches.
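The abstract does not spell out the criticality measure used; a common FMECA choice is the risk priority number (severity x occurrence x detectability), sketched below with invented railway-signalling failure modes.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    item: str
    mode: str
    severity: int     # 1 (negligible) .. 10 (catastrophic)
    occurrence: int   # 1 (rare) .. 10 (frequent)
    detection: int    # 1 (always detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        """Risk priority number, a common FMECA criticality measure."""
        return self.severity * self.occurrence * self.detection

# Hypothetical failure modes for a signalling system.
modes = [
    FailureMode("track circuit", "false occupancy", 4, 6, 3),
    FailureMode("signal lamp", "dark signal", 8, 3, 2),
    FailureMode("interlocking", "wrong-side output", 10, 1, 6),
]
for fm in sorted(modes, key=lambda f: f.rpn, reverse=True):
    print(f"RPN {fm.rpn:4d}  {fm.item}: {fm.mode}")
```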
Abstract:
Soil infiltration is a key link in the natural water cycle. Studies on soil permeability are conducive to water resources assessment and estimation, runoff regulation and management, soil erosion modeling, and nonpoint and point source pollution of farmland, among other aspects. The unequal influence of rainfall duration, rainfall intensity, antecedent soil moisture, vegetation cover, vegetation type, and slope gradient on soil cumulative infiltration was studied under simulated rainfall and different underlying surfaces. We established a six-factor model of soil cumulative infiltration using an improved back propagation (BP)-based artificial neural network algorithm with a momentum term and a self-adjusting learning rate. Compared to the multiple nonlinear regression method, the stability and accuracy of the improved BP algorithm were better. Based on the improved BP model, the sensitivity index of each of these six factors on soil cumulative infiltration was investigated. Secondly, the grey relational analysis method was used to study the grey correlations between each of these six factors and soil cumulative infiltration. The results of the two methods were very similar: rainfall duration was the most influential factor, followed by vegetation cover, vegetation type, rainfall intensity and antecedent soil moisture, while the effect of slope gradient on soil cumulative infiltration was not significant.
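A minimal sketch of the grey relational analysis step described above: each factor series is normalized, grey relational coefficients are computed against the cumulative-infiltration reference, and their mean gives the factor's relational grade. The data below are synthetic stand-ins for two of the six factors.

```python
import numpy as np

def grey_relational_grades(reference, factors, rho=0.5):
    """Grey relational grade of each factor series with respect to the
    reference series; rho is the usual distinguishing coefficient."""
    def normalize(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min())

    ref = normalize(reference)
    deltas = [np.abs(ref - normalize(s)) for s in factors]
    d_min = min(d.min() for d in deltas)   # global minimum difference
    d_max = max(d.max() for d in deltas)   # global maximum difference
    return [np.mean((d_min + rho * d_max) / (d + rho * d_max)) for d in deltas]

# Synthetic runs: cumulative infiltration vs. two candidate factors.
infiltration = [12, 18, 25, 31, 40, 46, 55, 60]
duration =     [10, 20, 30, 40, 50, 60, 70, 80]   # tracks infiltration closely
intensity =    [55, 40, 70, 30, 65, 45, 60, 50]   # weakly related
for name, g in zip(["duration", "intensity"],
                   grey_relational_grades(infiltration, [duration, intensity])):
    print(f"grey relational grade ({name}): {g:.3f}")
```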