987 results for "Average method"
Abstract:
PURPOSE: Effective cancer treatment generally requires combination therapy. The combination of external beam therapy (XRT) with radiopharmaceutical therapy (RPT) requires accurate three-dimensional dose calculations to avoid toxicity and evaluate efficacy. We have developed and tested a treatment planning method, using the patient-specific three-dimensional dosimetry package 3D-RD, for sequentially combined RPT/XRT therapy designed to limit toxicity to organs at risk. METHODS AND MATERIALS: The biologic effective dose (BED) was used to translate voxelized RPT absorbed dose (D(RPT)) values into a normalized total dose (or equivalent 2-Gy-fraction XRT absorbed dose), NTD(RPT), map. The BED was calculated numerically using an algorithmic approach, which enabled a more accurate calculation of BED and NTD(RPT). A combined Samarium-153 RPT and external beam treatment plan was designed to deliver a tumoricidal dose while delivering no more than 50 Gy of NTD(sum) to the spinal cord of a patient with a paraspinal tumor. RESULTS: The average voxel NTD(RPT) to tumor from RPT was 22.6 Gy (range, 1-85 Gy); the maximum spinal cord voxel NTD(RPT) from RPT was 6.8 Gy. The combined therapy NTD(sum) to tumor was 71.5 Gy (range, 40-135 Gy) for a maximum voxel spinal cord NTD(sum) equal to the maximum tolerated dose of 50 Gy. CONCLUSIONS: A method that enables real-time treatment planning of combined RPT-XRT has been developed. By implementing a more generalized conversion between the dose values from the two modalities and an activity-based treatment of partial volume effects, the reliability of combination therapy treatment planning has been improved.
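The dose conversion described above rests on the linear-quadratic BED model. A minimal sketch (this is not the paper's voxel-wise numerical algorithm for decaying RPT dose rates, and the α/β value used below is illustrative) of the fractionated-XRT BED and its conversion to a 2-Gy-fraction normalized total dose:

```python
def xrt_bed(n_fractions, dose_per_fraction, alpha_beta):
    """BED for fractionated external beam therapy: n*d*(1 + d/(alpha/beta))."""
    total_dose = n_fractions * dose_per_fraction
    return total_dose * (1.0 + dose_per_fraction / alpha_beta)

def ntd2(bed, alpha_beta):
    """Normalized total dose: the 2-Gy-per-fraction XRT dose with the same BED."""
    return bed / (1.0 + 2.0 / alpha_beta)

# Sanity check with an illustrative alpha/beta of 10 Gy: a conventional
# 25 x 2 Gy course has BED 60 Gy and, by construction, NTD of 50 Gy.
bed = xrt_bed(25, 2.0, 10.0)
```

With this convention, XRT and RPT voxel doses expressed as NTD values can be summed directly, which is the basis of the NTD(sum) constraint in the abstract.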
Abstract:
The objective of this work was to propose a way of using Tocher's clustering method to obtain a matrix analogous to the cophenetic matrix obtained for hierarchical methods, which would allow the calculation of a cophenetic correlation. To illustrate the construction of the proposed cophenetic matrix, we used two dissimilarity matrices - one obtained with the generalized squared Mahalanobis distance and the other with the Euclidean distance - between 17 garlic cultivars, based on six morphological characters. Essentially, the proposal for obtaining the cophenetic matrix was to use the average distances within and between clusters after performing the clustering. A function in the R language was proposed to compute the cophenetic matrix for Tocher's method. The empirical distribution of this correlation coefficient was briefly studied. For both dissimilarity measures, the cophenetic correlation values obtained for Tocher's method were higher than those obtained with the hierarchical methods (Ward's algorithm and average linkage - UPGMA). Comparisons between clusterings produced by agglomerative hierarchical methods and by Tocher's method can thus be performed using a common criterion: the correlation between the matrices of original and cophenetic distances.
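The averaging rule described above is simple to prototype. The paper provides an R function; the Python sketch below is a rough equivalent of the idea only (function names are illustrative, and it assumes a partition is already available from Tocher's method):

```python
import numpy as np

def tocher_cophenetic(dist, labels):
    """Cophenetic-like matrix for a partition (e.g. from Tocher's method):
    entry (i, j) is the mean distance within their shared cluster if i and j
    are clustered together, or the mean distance between their two clusters
    otherwise."""
    dist = np.asarray(dist, dtype=float)
    labels = np.asarray(labels)
    coph = np.zeros_like(dist)
    for a in np.unique(labels):
        for b in np.unique(labels):
            ia, ib = np.where(labels == a)[0], np.where(labels == b)[0]
            block = dist[np.ix_(ia, ib)]
            if a == b:
                # mean over off-diagonal pairs within the cluster
                m = block.sum() / (len(ia) * (len(ia) - 1)) if len(ia) > 1 else 0.0
            else:
                m = block.mean()
            coph[np.ix_(ia, ib)] = m
    np.fill_diagonal(coph, 0.0)
    return coph

def cophenetic_correlation(dist, coph):
    """Pearson correlation between the upper triangles of the original
    and cophenetic distance matrices."""
    iu = np.triu_indices(len(dist), k=1)
    return np.corrcoef(np.asarray(dist)[iu], coph[iu])[0, 1]
```

The same correlation can then be computed for hierarchical clusterings, giving the common comparison criterion the abstract mentions.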
Abstract:
A comparative performance analysis of four geolocation methods in terms of their theoretical root mean square positioning errors is provided. Comparison is established in two different ways: strict and average. In the strict type, methods are examined for a particular geometric configuration of base stations (BSs) with respect to mobile position, which determines a given noise profile affecting the respective time-of-arrival (TOA) or time-difference-of-arrival (TDOA) estimates. In the average type, methods are evaluated in terms of the expected covariance matrix of the position error over an ensemble of random geometries, so that comparison is geometry independent. Exact semianalytical equations and associated lower bounds (depending solely on the noise profile) are obtained for the average covariance matrix of the position error in terms of the so-called information matrix specific to each geolocation method. Statistical channel models inferred from field trials are used to define realistic prior probabilities for the random geometries. A final evaluation provides extensive results relating the expected position error to channel model parameters and the number of base stations.
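For the TOA case, the information matrix for a fixed geometry (the "strict" comparison above) has a standard form under independent Gaussian ranging errors. A hedged 2-D sketch, with ranging standard deviations expressed directly in metres, might look like:

```python
import numpy as np

def toa_information_matrix(mobile, stations, range_std):
    """Fisher information matrix for 2-D TOA positioning, assuming
    independent Gaussian ranging errors (one std, in metres, per station)."""
    mobile = np.asarray(mobile, dtype=float)
    J = np.zeros((2, 2))
    for bs, sigma in zip(np.asarray(stations, dtype=float), range_std):
        u = (mobile - bs) / np.linalg.norm(mobile - bs)  # unit line-of-sight vector
        J += np.outer(u, u) / sigma**2
    return J

def rmse_lower_bound(J):
    """Root of the CRLB trace: best achievable RMS position error."""
    return float(np.sqrt(np.trace(np.linalg.inv(J))))
```

Averaging `J` (or the resulting error covariance) over random base-station geometries drawn from channel-model priors yields the geometry-independent "average" comparison the abstract describes.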
Abstract:
A new method is described for the determination of the herbicide bispyribac-sodium in surface water, especially river and irrigated rice water samples. The method involves solid-phase extraction and quantification by high-performance liquid chromatography with diode array detection (HPLC-DAD). After optimization of the extraction and separation parameters, the method was validated. The method presented average recoveries of 101.3% and 97.7% under repeatability and intermediate precision conditions, respectively, with adequate precision (RSD from 0.9 to 7.5%). The method was applied to the determination of bispyribac-sodium in surface water samples, with a limit of detection of 0.1 μg L-1.
Abstract:
This study validated a high-performance liquid chromatography (HPLC) method for the quantitative evaluation of quercetin in topical emulsions. The method was linear within the 0.05-200 μg/mL range, with a correlation coefficient of 0.9997 and no interference in the quercetin peak. The detection and quantitation limits were 18 and 29 ng/mL, respectively. The intra- and inter-assay precisions presented R.S.D. values lower than 2%. On average, 93% and 94% of quercetin was recovered from non-ionic and anionic emulsions, respectively. The raw material and the anionic emulsion, but not the non-ionic emulsion, were stable under all storage conditions for one year. The reported method is a fast and reliable HPLC technique for quercetin determination in topical emulsions.
Abstract:
The objective of this research was to develop and validate an alternative analytical method for the quantitative determination of levofloxacin in tablets and injection preparations. The calibration curves were linear over a concentration range from 3.0 to 8.0 μg mL-1. The relative standard deviation was below 1.0% for both formulations, and average recovery was 101.42 ± 0.45% and 100.34 ± 0.85% for tablets and injection formulations, respectively. The limit of detection and limit of quantitation were 0.08 and 0.25 μg mL-1, respectively. It was concluded that the developed method is suitable for the quality control of levofloxacin in pharmaceutical formulations.
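The validation figures reported in these abstracts (linearity, recovery, LOD/LOQ) follow standard calculations. A hedged sketch, assuming the common ICH-style estimates LOD = 3.3·s/S and LOQ = 10·s/S, where s is the residual standard deviation of the calibration line and S its slope (the papers may have used a different estimator, e.g. blank-based):

```python
import numpy as np

def calibration_limits(conc, signal):
    """LOD and LOQ from a linear calibration curve, ICH-style:
    3.3*s/S and 10*s/S, with s the residual standard deviation
    of the fit and S the slope."""
    conc = np.asarray(conc, dtype=float)
    signal = np.asarray(signal, dtype=float)
    slope, intercept = np.polyfit(conc, signal, 1)
    resid = signal - (slope * conc + intercept)
    s = np.sqrt(np.sum(resid**2) / (len(conc) - 2))  # n - 2 fit parameters
    return 3.3 * s / slope, 10.0 * s / slope

def recovery_percent(found, added):
    """Average recovery of spiked samples, in percent."""
    return 100.0 * np.mean(np.asarray(found, dtype=float) / np.asarray(added, dtype=float))
```
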
Abstract:
An LC-ESI-MS/MS method was developed and validated, according to European Union decision 2002/657/EC, for the determination of tetracyclines (TCs) in chicken muscle, since Europe is one of the main markets for Brazilian products. Linearity (r > 0.9979), limits of quantification in the range of 7.0-35.0 ng/g, average recoveries of 89.38-106.27%, and within-day and between-day precision were adequate for all TCs. The decision limit and the detection capability were 93.00-106.46 ng/g and 95.84-114.38 ng/g, respectively. This method is suitable for application in surveillance programmes of residues of TCs in chicken muscle samples.
Abstract:
In the present study, a reversed-phase high-performance liquid chromatographic (RP-HPLC) procedure was developed and validated for the simultaneous determination of seven water-soluble vitamins (thiamine, riboflavin, niacin, cyanocobalamin, ascorbic acid, folic acid, and p-aminobenzoic acid) and four fat-soluble vitamins (retinol acetate, cholecalciferol, α-tocopherol, and phytonadione) in multivitamin tablets. The linearity of the method was excellent (R² > 0.999) over the concentration range of 10 - 500 ng mL-1. The statistical evaluation of the method was carried out by performing the intra- and inter-day precision. The accuracy of the method was tested by measuring the average recovery; values ranged between 87.4% and 98.5% and were acceptable quantitative results that corresponded with the label claims.
Abstract:
This work is devoted to the analysis of signal variation of the Cross-Direction and Machine-Direction measurements from a paper web. The data come from a real paper machine. The goal of the work is to reconstruct the basis weight structure of the paper and to predict its future behaviour. The resulting synthetic data are needed for simulation of the paper web. The main idea used for describing the basis weight variation in the Cross-Direction is the Empirical Orthogonal Functions (EOF) algorithm, which is closely related to the Principal Component Analysis (PCA) method. Signal forecasting in time is based on time-series analysis. The two principal mathematical procedures used in the work are Autoregressive-Moving Average (ARMA) modelling and the Ornstein–Uhlenbeck (OU) process.
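The EOF decomposition mentioned above is conveniently computed via the singular value decomposition. A minimal sketch, assuming a layout with rows as time samples and columns as cross-direction positions (the thesis's actual preprocessing may differ):

```python
import numpy as np

def eof_decompose(field, n_modes):
    """EOF analysis via SVD: rows are time samples, columns are
    cross-direction positions. Returns the leading spatial modes, their
    time coefficients, and the fraction of variance each mode explains."""
    anomalies = field - field.mean(axis=0)        # remove the mean CD profile
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    modes = Vt[:n_modes]                          # spatial EOF patterns
    pcs = U[:, :n_modes] * s[:n_modes]            # principal-component time series
    var_frac = s**2 / np.sum(s**2)
    return modes, pcs, var_frac[:n_modes]
```

The retained time coefficients (`pcs`) are then the natural inputs for the ARMA/OU time-series modelling step.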
Abstract:
Understanding the hydrosedimentological behavior of a watershed is essential for properly managing and using its water resources. The objective of this study was to verify the feasibility of an alternative procedure for the indirect determination of the sediment key curve using a turbidimeter. The research was carried out on the São Francisco Falso River, situated in the west of the state of Paraná on the left bank of the ITAIPU reservoir. The direct method was applied using a DH-48 suspended sediment sampler. The indirect method consisted of the use of a linigraph and a turbidimeter. Based on the results obtained, it was concluded that the indirect method using a turbidimeter is fully feasible, since it yielded a power-function mathematical model equal to that of the direct method. Furthermore, the average suspended sediment discharge into the São Francisco Falso River during the 2006/2007 harvest was calculated at 7.26 metric t day-1.
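The power-function sediment key curve mentioned above, Qss = a·Q^b, is conventionally fitted by linear regression in log-log space. A minimal sketch (function and variable names are illustrative, not from the study):

```python
import numpy as np

def fit_rating_curve(discharge, sediment):
    """Fit the power-law sediment key curve Qss = a * Q**b by linear
    regression of log(Qss) on log(Q)."""
    b, log_a = np.polyfit(np.log(discharge), np.log(sediment), 1)
    return np.exp(log_a), b
```
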
Abstract:
Due to the lack of maximum rainfall equations for most locations in Mato Grosso do Sul State, hydraulic engineering projects have had to rely on information from the meteorological stations closest to the project site. Alternative methods, such as the 24-hour rain disaggregation method applied to daily rainfall data, are viable because of the greater availability of stations and their longer observation records. Based on this approach, the objective of this study was to estimate maximum rainfall equations for Mato Grosso do Sul State by adjusting the 24-hour rain disaggregation method using data from the Dourados and Campo Grande rain gauge stations. For this purpose, data from 105 rainfall stations available in the database of ANA (the National Water Agency) were used. Based on the results, we concluded that the intense rainfall equations obtained by pluviogram analysis showed determination coefficients above 99%, and that the performance of the 24-hour rain disaggregation method was classified as excellent, based on the relative average error and Willmott's (1982) concordance index.
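The 24-hour disaggregation method converts a 1-day gauge depth to shorter durations by multiplying fixed coefficients. A sketch of the mechanics with illustrative ratios only (the study's fitted coefficients, not these placeholder values, must be used in practice):

```python
# Illustrative disaggregation ratios: depth for a given duration as a
# fraction of the 24 h depth. Placeholder values for demonstration.
RATIOS = {"24h": 1.00, "12h": 0.85, "6h": 0.72, "1h": 0.42}

def disaggregate(one_day_depth, duration, day_to_24h=1.14):
    """Convert a 1-day rain-gauge depth (mm) to a shorter-duration depth by
    chained multiplication of disaggregation coefficients. The default
    1-day-to-24-h factor is likewise illustrative."""
    return one_day_depth * day_to_24h * RATIOS[duration]
```

The disaggregated depths for each duration and return period are then what the intense-rainfall (intensity-duration-frequency) equations are fitted to.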
Abstract:
The aim of this study was to compare two methods of tear sampling for protein quantification. Tear samples were collected from 29 healthy dogs (58 eyes) using Schirmer tear test (STT) strips and microcapillary tubes. The samples were frozen at -80ºC and analyzed by the Bradford method. Results were analyzed by Student's t test. The average protein concentration and standard deviation of tears collected with microcapillary tubes were 4.45 mg/mL ±0.35 and 4.52 mg/mL ±0.29 for right and left eyes, respectively. The average protein concentration and standard deviation of tears collected with STT strips were 54.5 mg/mL ±0.63 and 54.15 mg/mL ±0.65 for right and left eyes, respectively. Statistically significant differences (p<0.001) were found between the methods. Under the conditions of this study, the average protein concentration obtained with the Bradford test from tear samples collected with STT strips was higher than that obtained with microcapillary tubes. Reference values for tear protein concentration should therefore be interpreted according to the tear sampling method used.
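The comparison above uses Student's t test for two independent samples. A minimal pooled-variance implementation (a sketch of the statistic only; the study would also have looked up the p-value for the resulting t and degrees of freedom):

```python
import math

def two_sample_t(x, y):
    """Pooled-variance Student's t statistic for two independent samples;
    returns (t, degrees of freedom)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)  # pooled variance
    t = (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))
    return t, nx + ny - 2
```
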
Abstract:
Platelet-rich plasma (PRP) is an easy and inexpensive product that stands out for its growth factors in tissue repair. To obtain PRP, whole blood is centrifuged with specific times and gravitational forces. The present work aimed to study a double centrifugation method for obtaining PRP, in order to evaluate the effective increase of platelet concentration in the final product, the preparation of PRP gel, and the optimization of the preparation time of the final sample. Fifteen female White New Zealand rabbits underwent blood sampling for the preparation of PRP. Samples were separated into two sterile tubes containing sodium citrate. Tubes were submitted to the double centrifugation protocol, with the lid closed, at 1600 revolutions per minute (rpm) for 10 minutes, resulting in the separation of red blood cells from plasma with platelets and leucocytes. Afterwards, the tubes were opened and the plasma was pipetted and transferred into another sterile tube. The plasma was centrifuged again at 2000 rpm for 10 minutes, splitting it into two parts: platelet-poor plasma (PPP) on top and the platelet button at the bottom. Part of the PPP was discarded so that only 1 ml remained in the tube along with the platelet button. This material was gently agitated to resuspend the platelets and activated by adding 0.3 ml of calcium gluconate, resulting in PRP gel. The double centrifugation protocol produced a platelet concentration 3 times higher than that of the initial blood sample. The 0.3 ml volume of calcium gluconate used for platelet activation was sufficient to coagulate the sample. Coagulation time ranged from 8 to 20 minutes, with an average of 17.6 minutes. The whole procedure, from blood centrifugation to PRP gel, took only 40 minutes.
It was concluded that PRP was successfully obtained by the double centrifugation protocol, which increases the platelet concentration of the sample compared with whole blood, allowing its use in surgical procedures. Furthermore, the preparation time of only 40 minutes is appropriate, and calcium gluconate is able to promote the activation of platelets.
Abstract:
Interest in working capital management increased among practitioners and researchers because the financial crisis of 2008 caused a deterioration of the general financial situation. The importance of managing working capital effectively increased dramatically during the financial crisis. On one hand, companies highlighted the importance of working capital management as part of short-term financial management to overcome funding difficulties. On the other hand, in academia, the need to analyze working capital management from a wider perspective, namely the value chain perspective, has been highlighted. Previously, academic articles mostly discussed working capital management from a company-centered perspective. The objective of this thesis was to put working capital management in a wider and more academic perspective and to present case studies of the value chains of industries as instrumental in the theoretical contributions, with practical contributions complementary to the theoretical contributions and conclusions. The principal assumption of this thesis is that self-financing of value chains can be established through effective working capital management. Thus, the thesis introduces the financial value chain analysis method, which is employed in the empirical studies. The effectiveness of working capital management of the value chains is studied through the cycle time of working capital. The financial value chain analysis method employed in this study is designed for considering value-chain-level phenomena: it provides a holistic picture of the value chain through financial figures and extends value chain analysis to the industry level. Working capital management is studied by the cash conversion cycle, which measures the length of time (in days) a company has funds tied up in working capital, starting from the payment of purchases to the supplier and ending when remittance of sales is received from the customers.
The working capital management practices of the automotive, pulp and paper, and information and communication technology (ICT) industries were studied in this research project. Additionally, the Finnish pharmaceutical industry was studied to obtain a deeper understanding of the working capital management of the value chain. The results indicate that the cycle time of working capital is constant in the value chain context over time. The cash conversion cycles of the automotive, pulp and paper, and ICT industries are on average 70, 60, and 40 days, respectively. The difference is mainly a consequence of the different cycle times of inventories. The financial crisis of 2008 affected the working capital management of the industries similarly: both the cycle time of accounts receivable and that of accounts payable increased between 2008 and 2009. The results suggest that the companies of the automotive, pulp and paper, and ICT value chains were not able to self-finance, nor do the results indicate an improvement of the value chains' position with regard to working capital management. The findings suggest that companies operating in the Finnish pharmaceutical industry are interested in developing their own working capital management, but collaboration with value chain partners is not considered interesting. Competition no longer occurs between individual companies, but between value chains. Therefore, the financial value chain analysis method introduced in this thesis has the potential to support value chains in improving their competitiveness.
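The cash conversion cycle used as the measure above combines three standard cycle times. A minimal sketch of the arithmetic (component definitions follow the conventional balance-to-flow ratios; the thesis's exact data sources may differ):

```python
def days_outstanding(balance, annual_flow, days=365):
    """Generic cycle-time component in days, e.g.
    DSO = accounts receivable / annual sales * 365."""
    return balance / annual_flow * days

def cash_conversion_cycle(dio, dso, dpo):
    """CCC = days inventory outstanding + days sales outstanding
           - days payables outstanding."""
    return dio + dso - dpo
```

For example, DIO of 50 days, DSO of 45 days, and DPO of 25 days give a CCC of 70 days, matching the order of magnitude reported for the automotive value chain.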
Abstract:
Pre-publication drafts are reproduced with permission and copyright © 2013 of the Journal of Orthopaedic Trauma [Mutch J, Rouleau DM, Laflamme GY, Hagemeister N. Accurate Measurement of Greater Tuberosity Displacement without Computed Tomography: Validation of a method on Plain Radiography to guide Surgical Treatment. J Orthop Trauma. 2013 Nov 21: Epub ahead of print.] and copyright © 2014 of the British Editorial Society of Bone and Joint Surgery [Mutch JAJ, Laflamme GY, Hagemeister N, Cikes A, Rouleau DM. A new morphologic classification for greater tuberosity fractures of the proximal humerus: validation and clinical Implications. Bone Joint J 2014;96-B:In press.]