Abstract:
The main object of the present paper is to give formulas and methods which enable us to determine the minimum number of repetitions, or of individuals, necessary to guarantee to some extent the success of an experiment. The theoretical basis of all the processes is essentially the following. Knowing the frequency p of the desired events and q of the undesired events, we may calculate the frequency of all possible combinations to be expected in n repetitions by expanding the binomial (p + q)^n. Determining which of these combinations we want to avoid, we calculate their total frequency, selecting the value of the exponent n of the binomial in such a way that this total frequency is equal to or smaller than the accepted limit of precision:

n! p^n [ (1/n!)(q/p)^n + (1/(1!(n-1)!))(q/p)^(n-1) + (1/(2!(n-2)!))(q/p)^(n-2) + (1/(3!(n-3)!))(q/p)^(n-3) + ... ] <= P.lim ----------(1b)

There does not exist an absolute limit of precision, since its value depends not only upon psychological factors in our judgement but is at the same time a function of the number of repetitions. For this reason I have proposed (1, 56) two relative values, one equal to 1 : 5n as the lowest value of probability and the other equal to 1 : 10n as the highest value of improbability, leaving between them what may be called the "region of doubt". However, these formulas cannot be applied in our case, since the number n is just the unknown quantity. Thus we have to use, instead of the more exact values of these two formulas, the conventional limits P.lim equal to 0.05 (precision 5%), 0.01 (precision 1%) and 0.001 (precision 0.1%). The binomial formula as explained above (cf. formula 1, pg. 85), however, is of rather limited applicability owing to the excessive calculation necessary, and we thus have to procure approximations as substitutes.
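The exact binomial criterion described above can be checked numerically. The helper below is hypothetical (not from the paper): it simply enlarges n until the total frequency of the unwanted combinations, the cases with m or fewer desired individuals, falls at or below the precision limit P.lim.

```python
from math import comb

def min_repetitions(p, m, p_lim):
    """Smallest n such that the chance of m or fewer 'desired' events
    in n trials falls at or below the precision limit p_lim."""
    q = 1.0 - p
    n = m + 1
    while True:
        # cumulative binomial tail: P(X <= m) for X ~ Binomial(n, p)
        tail = sum(comb(n, k) * p**k * q**(n - k) for k in range(m + 1))
        if tail <= p_lim:
            return n
        n += 1

# e.g. frequency p = 0.25, at least one desired individual (m = 0),
# 5% precision limit
print(min_repetitions(0.25, 0, 0.05))  # 11
```

For p = 0.25 this gives n = 11, since 0.75^10 ≈ 0.056 is still above the 5% limit while 0.75^11 ≈ 0.042 is below it.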
We may use, without loss of precision, the following approximations: a) the normal or Gaussian distribution when the expected frequency p has any value between 0.1 and 0.9 and when n is at least superior to ten; b) the Poisson distribution when the expected frequency p is smaller than 0.1. Tables V to VII show for some special cases that these approximations are very satisfactory. The practical solution of the following problems, stated in the introduction, can now be given:

A) What is the minimum number of repetitions necessary in order to avoid that any one of a treatments, varieties, etc. may accidentally be always the best, or the best and second best, or the first, second and third best, or finally one of the m best treatments, varieties, etc.? Using the first term of the binomial, we have the following equation for n:

n = log P.lim / log (m : a) = log P.lim / (log m - log a) ----------(5)

B) What is the minimum number of individuals necessary in order that a certain type, expected with the frequency p, may appear in at least one, two, three or a = m + 1 individuals?

1) For p between 0.1 and 0.9, and using the Gaussian approximation, we have:

n p - δ √(n p (1-p)) = a - 1 = m
b = δ √((1-p)/p) and c = m/p ----------(7)
n = [ (b + √(b^2 + 4c)) / 2 ]^2
n' = 1/p, n(cor) = n + n' ----------(8)

We have to use the correction n' when p has a value between 0.25 and 0.75. The Greek letter delta represents in the present case the unilateral limits of the Gaussian distribution for the three conventional limits of precision: 1.64, 2.33 and 3.09 respectively. If we are only interested in having at least one individual, m becomes equal to zero and the formula reduces to:

c = m/p = 0 for a = 1
n = [ (b + √(b^2)) / 2 ]^2 = b^2 = δ^2 (1-p)/p ----------(9)
n' = 1/p, n(cor) = n + n'

2) If p is smaller than 0.1 we may use table 1 in order to find the mean m of a Poisson distribution and determine n = m : p.

C) Which is the minimum number of individuals necessary for distinguishing two frequencies p1 and p2?
1) When p1 and p2 are values between 0.1 and 0.9 we have:

n = { δ [√(p1(1-p1)) + √(p2(1-p2))] / (p1 - p2) }^2
n' = 1/(p1 - p2), n(cor) = n + n' ----------(13)

We have again to use the unilateral limits of the Gaussian distribution. The correction n' should be used if at least one of the values p1 or p2 lies between 0.25 and 0.75. A more complicated formula may be used in cases where we want to increase the precision:

n (p1 - p2) - δ √n [√(p1(1-p1)) + √(p2(1-p2))] = m
b = δ [√(p1(1-p1)) + √(p2(1-p2))] / (p1 - p2), c = m/(p1 - p2)
n = [ (b + √(b^2 + 4c)) / 2 ]^2 ----------(14)
n' = 1/(p1 - p2)

2) When both p1 and p2 are smaller than 0.1 we determine the quotient (p1 : p2) and procure the corresponding number m2 of a Poisson distribution in table 2. The value n is found by the equation:

n = m2 / p2 ----------(15)

D) What is the minimum number necessary for distinguishing three or more frequencies p1, p2, p3? 1) If the frequencies p1, p2, p3 are values between 0.1 and 0.9 we have to solve the individual pairwise equations and use the highest value of n thus determined:

n(1,2) = { δ [√(p1(1-p1)) + √(p2(1-p2))] / (p1 - p2) }^2 ----------(16)

δ now represents the bilateral limits of the Gaussian distribution: 1.96, 2.58 and 3.29. 2) No table was prepared for the relatively rare cases of a comparison of three or more frequencies below 0.1, since in such cases extremely high numbers would be required.

E) A process is given which serves to solve two problems of informatory nature: a) if a special type appears in n individuals with a frequency p(obs), what may be the corresponding ideal value of p(esp); or b) if we study samples of n individuals and expect a certain type with a frequency p(esp), what may be the extreme limits of p(obs) in individual families? 1) If we are dealing with values between 0.1 and 0.9 we may use table 3.
To solve the first question we select the respective horizontal line for p(obs), determine which column corresponds to our value of n, and find the respective value of p(esp) by interpolating between columns. In order to solve the second problem we start with the respective column for p(esp) and find the horizontal line for the given value of n, either directly or by approximation and interpolation. 2) For frequencies smaller than 0.1 we have to use table 4 and transform the fractions p(esp) and p(obs) into numbers of a Poisson series by multiplication by n. In order to solve the first problem, we verify in which line the lower Poisson limit is equal to m(obs) and transform the corresponding value of m into the frequency p(esp) by dividing by n. The observed frequency may thus be a chance deviate of any value between 0.0... and the value given by dividing the value of m in the table by n. In the second case we transform first the expectation p(esp) into a value of m and procure, in the horizontal line corresponding to m(esp), the extreme values of m, which then must be transformed, by dividing by n, into values of p(obs).

F) Partial and progressive tests may be recommended in all cases where there is a lack of material, or where the loss of time is less important than the cost of large-scale experiments, since in many cases the minimum number necessary to guarantee the results within the limits of precision is rather large. One should not forget that the minimum number really represents at the same time a maximum number, necessary only if one takes into consideration essentially the unfavorable variations; smaller numbers may frequently already give satisfactory results. For instance, by definition, we know that a frequency of p means that we expect one individual in every total of 1/p. If there were no chance variations, this number 1/p would be sufficient, and if there were favorable variations a still smaller number might yield one individual of the desired type.
Thus, trusting to luck, one may start the experiment with numbers smaller than the minimum calculated according to the formulas given above, and increase the total until the desired result is obtained, which may well be before the "minimum number" is reached. Some concrete examples of this partial or progressive procedure are given from our genetical experiments with maize.
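The Gaussian-approximation rules of sections B and C can be put in code. This is a sketch under the assumption that formula (7) solves n·p - δ√(n·p(1-p)) = m for n, and that formula (13) is its two-frequency analogue; the function names are invented, not the paper's.

```python
from math import sqrt

# unilateral Gaussian limits for the 5%, 1% and 0.1% precision limits
DELTA = {0.05: 1.64, 0.01: 2.33, 0.001: 3.09}

def min_individuals(p, a, delta):
    """Formulas (7)-(9): minimum number of individuals so that a type of
    frequency p (0.1 < p < 0.9) appears in at least a = m + 1 of them."""
    m = a - 1
    b = delta * sqrt((1.0 - p) / p)
    c = m / p
    n = ((b + sqrt(b * b + 4.0 * c)) / 2.0) ** 2
    if 0.25 <= p <= 0.75:          # correction n' = 1/p in this range
        n += 1.0 / p
    return n

def min_individuals_two_freq(p1, p2, delta):
    """Formula (13): minimum n for distinguishing frequencies p1 > p2,
    both between 0.1 and 0.9."""
    n = (delta * (sqrt(p1 * (1 - p1)) + sqrt(p2 * (1 - p2)))
         / (p1 - p2)) ** 2
    if 0.25 <= p1 <= 0.75 or 0.25 <= p2 <= 0.75:
        n += 1.0 / (p1 - p2)       # correction n' = 1/(p1 - p2)
    return n

# at least one individual of a type with p = 0.25, and separating
# p1 = 0.75 from p2 = 0.25, both at the 5% precision limit:
print(min_individuals(0.25, 1, DELTA[0.05]))
print(min_individuals_two_freq(0.75, 0.25, DELTA[0.05]))
```

For p = 0.25 and a = 1 this yields about 12 individuals, close to the exact binomial answer of 11, which is the kind of agreement Tables V to VII report.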
Abstract:
White micas in carbonate-rich tectonites and a few other rock types of large thrusts in the Swiss Helvetic fold-and-thrust belt have been analyzed by Ar-40/Ar-39 and Rb/Sr techniques to better constrain the timing of Alpine deformation for this region. Incremental Ar-40/Ar-39 heating experiments on 25 weakly metamorphosed (anchizone to low greenschist) samples yield plateau and staircase spectra. We interpret most of the staircase release spectra as resulting from variable mixtures of syntectonic (neoformed) and detrital micas. The range in dates obtained within individual spectra depends primarily on the duration of mica nucleation and growth and on the relative proportions of neoformed and detrital mica. Rb/Sr analyses of 12 samples yield dates of ca. 10-39 Ma (excluding one anomalously young sample). These dates are slightly younger than the Ar-40/Ar-39 total gas dates obtained for the same samples. The Rb/Sr dates were calculated using initial Sr-87/Sr-86 ratios obtained from the carbonate-dominated host rocks, which are higher than normal Mesozoic carbonate values due to exchange with fluids of higher Sr-87/Sr-86 ratios (and lower O-18/O-16 ratios). Model dates calculated using Sr-87/Sr-86 values typical of Mesozoic marine carbonates more closely approximate the Ar-40/Ar-39 total gas dates for most of the samples. The similarities of Rb/Sr and Ar-40/Ar-39 total gas dates are consistent with limited amounts of detrital mica in the samples. The delta(18)O values range from 24 to 15 per mil (VSMOW) for the 2-6 μm micas and from 27 to 16 per mil for the carbonate host rocks. The carbonate values are significantly lower than their protolith values due to localized fluid-rock interaction and fluid flow along most thrust surfaces. Although most calcite-mica pairs are not in oxygen isotope equilibrium at temperatures of ca. 
200-400 °C, their isotopic fractionations are indicative of either 1) partial exchange between the minerals and a common external fluid, or 2) growth or isotopic exchange of the mica with the carbonate after the carbonate had isotopically exchanged with an external fluid. The geological significance of these results is not easily or uniquely determined, and exemplifies the difficulties inherent in dating very fine-grained micas of highly deformed tectonites in low-grade metamorphic terranes. Two generalizations can be made regarding the dates obtained from the Helvetic thrusts: 1) samples from the two highest thrusts (Mt. Gond and Sublage) have all of their Ar-40/Ar-39 steps above 20 Ma, and 2) most samples from the deepest Helvetic thrusts have steps (often accounting for more than 80% of Ar-39 release) between 15 and 25 Ma. These dates are consistent with the order of thrusting in the foreland-imbricating system and with increased proportions of neoformed relative to detrital mica in the more metamorphosed hinterland and deeply buried portions of the nappe pile. Individual thrusts accommodated the majority of their displacement during their initial incorporation into the foreland-imbricating system, and some thrusts remained active or were reactivated down to 15 Ma.
Abstract:
In traditional criminal investigation, uncertainties are often dealt with using a combination of common sense, practical considerations and experience, but rarely with tailored statistical models. For example, in some countries, in order for a profile from a simple trace to be searched in the national DNA database, it must have allelic information for six or more of the ten SGM Plus loci. If the profile does not have this amount of information then it cannot be searched in the national DNA database (NDNAD). This requirement (of a result at six or more loci) is not based on a statistical approach, but rather on the feeling that six or more would be sufficient. A statistical approach, however, could be more rigorous and objective, and would take into consideration, in a sensible way, factors such as the probability of adventitious matches relative to the actual database size and/or the investigator's requirements. Therefore, this research was undertaken to establish scientific foundations pertaining to the use of partial SGM Plus profiles (or similar) for investigation.
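The trade-off the authors point to, adventitious matches against database size, can be illustrated with a simple expected-count calculation; the figures and the helper below are hypothetical and are not the paper's model.

```python
def expected_adventitious_matches(match_prob, db_size):
    """Expected number of chance (adventitious) hits when one profile is
    searched against a database of db_size profiles, assuming independent
    comparisons. A back-of-envelope sketch, not the paper's model."""
    return match_prob * db_size

# hypothetical figures: a partial profile with a random match
# probability of 1e-6 searched against 5 million records
print(expected_adventitious_matches(1e-6, 5_000_000))  # about 5 chance hits
```

The point of the example is that the same partial profile may be perfectly searchable against a small database yet produce several spurious hits against a large one, which is why a fixed six-locus rule is cruder than a database-size-aware criterion.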
Abstract:
The FIT trial was conducted to evaluate the safety and efficacy of 90Y-ibritumomab tiuxetan (0.4 mCi/kg; maximum dose 32 mCi) when used as consolidation of first complete or partial remission in patients with previously untreated, advanced-stage follicular lymphoma (FL). Patients were randomly assigned to either 90Y-ibritumomab treatment (n = 207) or observation (n = 202) within 3 months (mo) of completing initial induction therapy (chemotherapy only: 86%; rituximab in combination with chemotherapy: 14%). Response status prior to randomization did not differ between the groups: 52% complete response (CR)/CR unconfirmed (CRu) to induction therapy and 48% partial response (PR) in the 90Y-ibritumomab arm vs 53% CR/CRu and 44% PR in the control arm. The primary endpoint was progression-free survival (PFS) of the intent-to-treat (ITT) population. Results from the first extended follow-up after a median of 3.5 years revealed a significant improvement in PFS from the time of randomization with 90Y-ibritumomab consolidation compared with control (36.5 vs 13.3 mo, respectively; P < 0.0001; Morschhauser et al. JCO. 2008; 26:5156-5164). Here we report a median follow-up of 66.2 mo (5.5 years). Five-year PFS was 47% in the 90Y-ibritumomab group and 29% in the control group (hazard ratio (HR) = 0.51, 95% CI 0.39-0.65; P < 0.0001). Median PFS in the 90Y-ibritumomab group was 49 mo vs 14 mo in the control group. In patients achieving a CR/CRu after induction, 5-year PFS was 57% in the 90Y-ibritumomab group, and the median had not yet been reached at 92 months, compared with a 43% 5-year PFS in the control group and a median of 31 mo (HR = 0.61, 95% CI 0.42-0.89). For patients in PR after induction, the 5-year PFS was 38% in the 90Y-ibritumomab group with a median PFS of 30 mo vs 14% in the control group with a median PFS of 6 mo (HR = 0.38, 95% CI 0.27-0.53). 
Patients who had received rituximab as part of induction treatment had a 5-year PFS of 64% in the 90Y-ibritumomab group and 48% in the control group (HR = 0.66, 95% CI 0.30-1.47). For all patients, time to next treatment (as calculated from the date of randomization) differed significantly between the two groups; the median was not reached at 99 mo in the 90Y-ibritumomab group vs 35 mo in the control group (P < 0.0001). The majority of patients received rituximab-containing regimens when treated after progression (63/82 [77%] in the 90Y-ibritumomab group and 102/122 [84%] in the control group). Overall response rate to second-line treatment was 79% in the 90Y-ibritumomab group (57% CR/CRu and 22% PR) vs 78% in the control arm (59% CR/CRu, 19% PR). Five-year overall survival was not significantly different between the groups; 93% and 89% in the 90Y-ibritumomab and control groups, respectively (P = 0.561). To date, 40 patients have died: 18 in the 90Y-ibritumomab group and 22 in the control group. Secondary malignancies were diagnosed in 16 patients in the 90Y-ibritumomab arm vs 9 patients in the control arm (P = 0.19). There were 6 (3%) cases of myelodysplastic syndrome (MDS)/acute myelogenous leukemia (AML) in the 90Y-ibritumomab arm vs 1 MDS in the control arm (P = 0.063). In conclusion, this extended follow-up of the FIT trial confirms the benefit of 90Y-ibritumomab consolidation, with a nearly 3-year advantage in median PFS. A significant 5-year PFS improvement was confirmed for patients with a CR/CRu or a PR after induction. Effective rescue treatment with rituximab-containing regimens may explain the absence of an overall survival difference between the two patient groups, who were for the greater part rituximab-naïve.
Abstract:
Purpose: Surgery remains the treatment of choice for localized renal neoplasms. While radical nephrectomy was long considered the gold standard, partial nephrectomy has equivalent oncological results for small tumors. The role of negative surgical margins continues to be debated. Intraoperative frozen section analysis is expensive and time-consuming. We assessed the feasibility of intraoperative ex vivo ultrasound of resection margins in patients undergoing partial nephrectomy and its correlation with margin status on definitive pathological evaluation.

Materials and Methods: A study was done at 2 institutions from February 2008 to March 2011. Patients undergoing partial nephrectomy for T1-T2 renal tumors were included in analysis. Partial nephrectomy was done by a standardized minimal healthy tissue margin technique. After resection the specimen was kept in saline and tumor margin status was immediately determined by ex vivo ultrasound. Sequential images were obtained to evaluate the whole tumor pseudocapsule. Results were compared with margin status on definitive pathological evaluation.

Results: A total of 19 men and 14 women with a mean +/- SD age of 62 +/- 11 years were included in analysis. Intraoperative ex vivo ultrasound revealed negative surgical margins in 30 cases and positive margins in 2, while it could not be done in 1. Final pathological results revealed negative margins in all except 1 case. Ultrasound sensitivity and specificity were 100% and 97%, respectively. Median ultrasound duration was 1 minute. Mean tumor and margin size was 3.6 +/- 2.2 cm and 1.5 +/- 0.7 mm, respectively.

Conclusions: Intraoperative ex vivo ultrasound of resection margins in patients undergoing partial nephrectomy is feasible and efficient. Large sample studies are needed to confirm its promising accuracy to determine margin status.
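The reported accuracy can be reproduced from the standard definitions; the confusion-matrix counts below are inferred from the abstract (1 true positive, 1 false positive, 30 true negatives among the 32 evaluable cases) and should be treated as an assumption.

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Standard definitions: sensitivity = TP / (TP + FN),
    specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# counts inferred from the abstract (an assumption): pathology found one
# positive margin, which ultrasound flagged (TP = 1); ultrasound raised
# one false alarm (FP = 1); the other 30 evaluable cases were negative.
sens, spec = sensitivity_specificity(tp=1, fp=1, tn=30, fn=0)
print(round(sens * 100), round(spec * 100))  # 100 97
```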
Abstract:
This paper suggests a simple method based on Chebyshev approximation at Chebyshev nodes to approximate partial differential equations. The methodology simply consists in determining the value function by using a set of nodes and basis functions. We provide two examples: pricing a European option and determining the best policy for shutting down a machine. The suggested method is flexible, easy to program and efficient. It is also applicable in other fields, providing efficient solutions to complex systems of partial differential equations.
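As an illustration of the building block the paper relies on, here is a minimal sketch of Chebyshev interpolation at Chebyshev nodes for a scalar function; the function names and the test function are ours, not the paper's, and a value-function solver would iterate this machinery rather than apply it once.

```python
from math import cos, pi, exp

def cheb_nodes(n):
    """n Chebyshev (Gauss-Chebyshev) nodes on [-1, 1]."""
    return [cos((2 * k + 1) * pi / (2 * n)) for k in range(n)]

def cheb_coeffs(f, n):
    """Coefficients c_j of the degree n-1 Chebyshev interpolant of f,
    computed via the discrete orthogonality of T_j at the nodes."""
    xs = cheb_nodes(n)
    fx = [f(x) for x in xs]
    coeffs = []
    for j in range(n):
        c = (2.0 / n) * sum(
            fx[k] * cos(j * (2 * k + 1) * pi / (2 * n)) for k in range(n))
        coeffs.append(c)
    coeffs[0] /= 2.0
    return coeffs

def cheb_eval(coeffs, x):
    """Evaluate sum_j c_j T_j(x) by the Clenshaw recurrence."""
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + c, b1
    return x * b1 - b2 + coeffs[0]

# interpolating exp with 10 nodes is already accurate to ~1e-9 on [-1, 1]
coeffs = cheb_coeffs(exp, 10)
print(abs(cheb_eval(coeffs, 0.3) - exp(0.3)) < 1e-7)  # True
```

The "easy to program and efficient" claim rests on exactly this structure: a handful of nodes, a cheap coefficient transform, and fast Clenshaw evaluation.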
Abstract:
Rebound is the extent to which improvements in energy efficiency fail to translate fully into reductions in energy use because of the implicit fall in the price of energy, when measured in efficiency units. This paper discusses aspects of the rebound effect that are introduced once energy is considered as a domestically produced commodity. A partial equilibrium approach is adopted in order to incorporate both energy use and production in a conceptually tractable way. The paper explores analytically two interesting results revealed in previous numerical simulations. The first is the possibility that energy use could fall by more than the implied improvement in efficiency. This corresponds to negative rebound. The second is the finding that the short-run rebound value can be greater than the corresponding long-run value.
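The opening definition can be made concrete with a small, hypothetical helper: rebound is the fraction of a proportional efficiency gain that fails to appear as reduced energy use, so it is negative exactly when energy use falls by more than the efficiency improvement.

```python
def rebound(efficiency_gain, energy_change):
    """Share of an efficiency improvement not realised as lower energy
    use. efficiency_gain: proportional gain (0.10 = 10% better);
    energy_change: observed proportional change in energy use
    (negative means a fall). Illustrative only, not the paper's model."""
    return 1.0 + energy_change / efficiency_gain

# 10% efficiency gain but energy use falls only 6%: rebound ≈ 0.4
print(rebound(0.10, -0.06))
# energy use falls by 12%, more than the gain: negative rebound
print(rebound(0.10, -0.12))
```

The second call corresponds to the paper's first analytical result, the possibility that energy use falls by more than the implied efficiency improvement.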
Abstract:
Untreated wastewater being directly discharged into rivers is a very harmful environmental hazard that needs to be tackled urgently in many countries. In order to safeguard the river ecosystem and reduce water pollution, it is important to have an effluent charge policy that promotes investment in wastewater treatment technology by domestic firms. This paper considers the strategic interaction between the government and the domestic firms regarding the investment in wastewater treatment technology and the design of the optimal effluent charge policy that should be implemented. In this model, the higher the proportion of non-investing firms, the higher the probability of having to incur an effluent charge and the higher that charge would be. On the one hand, the government needs to impose a sufficiently strict policy to ensure that firms have a strong incentive to invest. On the other hand, it cannot be so strict that it drives out firms which cannot afford to invest in such expensive technology. The paper analyses the factors that affect the probability of investment in this technology. It also explains the difficulty of imposing a strict environmental policy in countries that have too many small firms which cannot afford to invest unless subsidised.
Abstract:
We analyse a labour matching model with wage posting where, reflecting institutional constraints, firms cannot differentiate their wage offers within certain subsets of workers. Inter alia, we find that the presence of impersonal wage offers leads to wage compression, which propagates to the wages of high-productivity workers who receive personalised offers.
Abstract:
In the context of the two-stage threshold model of decision making, with the agent's choices determined by the interaction of three "structural variables," we study the restrictions on behavior that arise when one or more variables are exogenously known. Our results supply necessary and sufficient conditions for consistency with the model for all possible states of partial knowledge, and for both single- and multivalued choice functions.
Abstract:
PURPOSE: This study investigated maximal cardiometabolic response while running on a lower body positive pressure treadmill (antigravity treadmill (AG)), which reduces body weight (BW) and impact. The AG is used in rehabilitation of injuries but could have potential for high-speed running, if workload is maximally elevated. METHODS: Fourteen trained (nine male) runners (age 27 ± 5 yr; 10-km personal best, 38.1 ± 1.1 min) completed a treadmill incremental test (CON) to measure aerobic capacity and heart rate (VO2max and HRmax). They completed four identical tests (48 h apart, randomized order) on the AG at BW of 100%, 95%, 90%, and 85% (AG100 to AG85). Stride length and rate were measured at peak velocities (Vpeak). RESULTS: VO2max (mL·kg⁻¹·min⁻¹) was similar across all conditions (men: CON = 66.6 (3.0), AG100 = 65.6 (3.8), AG95 = 65.0 (5.4), AG90 = 65.6 (4.5), and AG85 = 65.0 (4.8); women: CON = 63.0 (4.6), AG100 = 61.4 (4.3), AG95 = 60.7 (4.8), AG90 = 61.4 (3.3), and AG85 = 62.8 (3.9)). Similar results were found for HRmax, except for AG85 in men and AG100 and AG90 in women, which were lower than CON. Vpeak (km·h⁻¹) in men was 19.7 (0.9) in CON, which was lower than in every other condition: AG100 = 21.0 (1.9) (P < 0.05), AG95 = 21.4 (1.8) (P < 0.01), AG90 = 22.3 (2.1) (P < 0.01), and AG85 = 22.6 (1.6) (P < 0.001). In women, Vpeak (km·h⁻¹) was similar between CON (17.8 (1.1)) and AG100 (19.3 (1.0)) but higher at AG95 = 19.5 (0.4) (P < 0.05), AG90 = 19.5 (0.8) (P < 0.05), and AG85 = 21.2 (0.9) (P < 0.01). CONCLUSIONS: The AG can be used at maximal exercise intensities at BW of 85% to 95%, reaching faster running speeds than normally feasible. The AG could be used for overspeed running programs at the highest metabolic response levels.