973 results for cost comparison
Abstract:
Background: The ACUITY and CRUSADE scores are validated models for the prediction of major bleeding events in acute coronary syndrome (ACS). However, the comparative performance of these scores is not known. Objective: To compare the accuracy of ACUITY and CRUSADE in predicting major bleeding events during ACS. Methods: This study included 519 patients consecutively admitted for unstable angina, non-ST-elevation or ST-elevation myocardial infarction. The scores were calculated from admission data. We considered major bleeding events during hospitalization not related to cardiac surgery, according to the Bleeding Academic Research Consortium (BARC) criteria (type 3 or 5: hemodynamic instability, need for transfusion, drop in hemoglobin ≥ 3 g/dL, and intracranial, intraocular or fatal bleeding). Results: Major bleeding was observed in 31 patients (23 caused by femoral puncture, 5 digestive, 3 in other sites), an incidence of 6%. While both scores were associated with bleeding, ACUITY demonstrated a better C-statistic (0.73, 95% CI 0.63-0.82) than CRUSADE (0.62, 95% CI 0.53-0.71; p = 0.04). The better performance of ACUITY was also reflected in a net reclassification improvement of +0.19 (p = 0.02) over CRUSADE's definition of low or high risk. Exploratory analysis suggested that the presence of the variables 'age' and 'type of ACS' in ACUITY was the main reason for its superiority. Conclusion: The ACUITY score is a better predictor of major bleeding than the CRUSADE score in patients hospitalized for ACS.
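The head-to-head comparison above rests on each score's C-statistic (area under the ROC curve) on the same cohort. A minimal sketch of how such a comparison can be run, using simulated stand-in data; the arrays and the bootstrap approach are illustrative assumptions, not the paper's actual analysis (which likely used a DeLong-type test):

```python
# Sketch: compare the discrimination (C-statistic) of two risk scores
# on the same cohort via bootstrap. All data below are simulated stand-ins.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 519                                       # cohort size from the abstract
bled = rng.random(n) < 0.06                   # ~6% major bleeding incidence
acuity = 5 * bled + rng.normal(10, 3, n)      # fake scores: higher = riskier
crusade = 3 * bled + rng.normal(30, 10, n)

print("C-statistic ACUITY :", round(roc_auc_score(bled, acuity), 2))
print("C-statistic CRUSADE:", round(roc_auc_score(bled, crusade), 2))

# Bootstrap 95% CI for the difference in C-statistics
diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    if bled[idx].all() or not bled[idx].any():   # need both outcome classes
        continue
    diffs.append(roc_auc_score(bled[idx], acuity[idx])
                 - roc_auc_score(bled[idx], crusade[idx]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC difference, 95% CI: [{lo:.2f}, {hi:.2f}]")
```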
Abstract:
Background: Polypharmacy is a significant economic burden. Objective: We tested whether purchasing medicines through reverse auction (RA), as compared with commercial pharmacy (CP), results in lower pharmaceutical costs for heart failure (HF) and heart transplantation (HT) outpatients. Methods: We compared costs via RA versus CP in 808 HF and 147 HT patients followed from 2009 through 2011, and evaluated the influence of clinical and demographic variables on cost. Results: The monthly cost per patient for HF drugs acquired via RA was $10.15 (IQR 3.51-40.22) versus $161.76 (IQR 86.05-340.15) via CP; for HT, those costs were $393.08 (IQR 124.74-774.76) and $1,207.70 (IQR 604.48-2,499.97), respectively. Conclusion: RA may reduce the cost of prescription drugs for HF and HT, potentially making HF treatment more accessible. Clinical characteristics can influence the cost and benefits of RA. RA may be a new health-policy strategy to reduce the cost of prescribed medications for HF and HT patients, reducing the economic burden of treatment.
Abstract:
Background: Risk scores for cardiac surgery cannot continue to be neglected. Objective: To assess the performance of the Age, Creatinine and Ejection Fraction (ACEF) Score in predicting mortality in patients undergoing elective coronary artery bypass graft and/or heart valve surgery, and to compare it with other scores. Methods: A prospective cohort study was carried out with the database of a Brazilian tertiary care center. A total of 2,565 patients undergoing elective surgery between May 2007 and July 2009 were assessed. For a more detailed analysis, the performance of the ACEF Score was compared with that of InsCor and EuroSCORE through correlation, calibration and discrimination tests. Results: Patients were stratified into mild, moderate and severe for all models. Calibration was inadequate for the ACEF Score (p = 0.046) and adequate for InsCor (p = 0.460) and EuroSCORE (p = 0.750). As for discrimination, the area under the ROC curve was questionable for the ACEF Score (0.625) and adequate for InsCor (0.744) and EuroSCORE (0.763). Conclusion: Although simple and practical to use, the ACEF Score, unlike InsCor and EuroSCORE, was not accurate in predicting mortality in patients undergoing elective coronary artery bypass graft and/or heart valve surgery in a Brazilian tertiary care center. (Arq Bras Cardiol. 2015; [online] ahead of print)
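For context, the ACEF Score itself is deliberately minimal: age divided by left-ventricular ejection fraction (%), plus one point when serum creatinine exceeds 2.0 mg/dL (its published definition; the mild/moderate/severe cut-offs used for stratification in the study are not reproduced here). A sketch:

```python
def acef_score(age_years: float, ejection_fraction_pct: float,
               creatinine_mg_dl: float) -> float:
    """ACEF = age / ejection fraction (%), +1 if creatinine > 2.0 mg/dL."""
    score = age_years / ejection_fraction_pct
    if creatinine_mg_dl > 2.0:
        score += 1.0
    return score

# Example: a 70-year-old with EF 50% and creatinine 1.1 mg/dL
print(acef_score(70, 50, 1.1))  # 1.4
```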
Abstract:
The main object of the present paper is to give formulas and methods which enable us to determine the minimum number of repetitions, or of individuals, necessary to guarantee to some extent the success of an experiment. The theoretical basis of all the processes is essentially the following. Knowing the frequency of the desired events $p$ and of the undesired events $q$, we may calculate the frequency of all possible combinations to be expected in $n$ repetitions by expanding the binomial $(p+q)^n$. Determining which of these combinations we want to avoid, we calculate their total frequency and select the exponent $n$ of the binomial in such a way that this total frequency is equal to or smaller than the accepted limit of precision:

$$n!\,p^n\left[\frac{1}{n!}\left(\frac{q}{p}\right)^{n} + \frac{1}{1!\,(n-1)!}\left(\frac{q}{p}\right)^{n-1} + \frac{1}{2!\,(n-2)!}\left(\frac{q}{p}\right)^{n-2} + \frac{1}{3!\,(n-3)!}\left(\frac{q}{p}\right)^{n-3} + \cdots\right] \le P_{lim} \quad (1b)$$

There is no absolute limit of precision, since its value depends not only upon psychological factors in our judgement but is at the same time a function of the number of repetitions. For this reason I have proposed (1,56) two relative values, one equal to $1/(5n)$ as the lowest value of probability and the other equal to $1/(10n)$ as the highest value of improbability, leaving between them what may be called the "region of doubt". However, these formulas cannot be applied in our case, since the number $n$ is just the unknown quantity. Thus we have to use, instead of the more exact values of these two formulas, the conventional limits $P_{lim}$ equal to 0.05 (precision 5%), 0.01 (precision 1%) and 0.001 (precision 0.1%). The binomial formula as explained above (cf. formula 1b) is, however, of rather limited applicability owing to the excessive calculation required, and we thus have to procure approximations as substitutes. We may use, without loss of precision, the following approximations: a) the normal or Gaussian distribution when the expected frequency $p$ has any value between 0.1 and 0.9, and when $n$ is at least greater than ten; b) the Poisson distribution when the expected frequency $p$ is smaller than 0.1. Tables V to VII show, for some special cases, that these approximations are very satisfactory. The practical solution of the problems stated in the introduction can now be given. A) What is the minimum number of repetitions necessary in order to avoid that any one of $a$ treatments, varieties, etc. may be accidentally always the best, or the best and second best, or the first, second and third best, or finally one of the $m$ best? Using the first term of the binomial, we have the following equation for $n$:

$$n = \frac{\log P_{lim}}{\log (m/a)} = \frac{\log P_{lim}}{\log m - \log a} \quad (5)$$

B) What is the minimum number of individuals necessary in order that a certain type, expected with frequency $p$, may appear in at least one, two, three, or generally $a = m+1$ individuals? 1) For $p$ between 0.1 and 0.9, using the Gaussian approximation, we have:

$$b = \delta\sqrt{\frac{1-p}{p}}, \qquad c = \frac{m}{p} \quad (7)$$

$$n = \left(\frac{b + \sqrt{b^2 + 4c}}{2}\right)^2, \qquad n' = \frac{1}{p}, \qquad n_{cor} = n + n' \quad (8)$$

We have to use the correction $n'$ when $p$ has a value between 0.25 and 0.75. The Greek letter $\delta$ represents in the present case the one-sided limits of the Gaussian distribution for the three conventional limits of precision: 1.64, 2.33 and 3.09 respectively.
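Read this way, formulas (5) and (7)-(8) translate directly into code. A minimal sketch under that reading (function names are mine; the δ values are the one-sided limits quoted above):

```python
import math

# One-sided normal limits for P_lim = 0.05, 0.01, 0.001 (per the paper)
DELTA = {0.05: 1.64, 0.01: 2.33, 0.001: 3.09}

def min_repetitions(a: int, m: int, p_lim: float) -> int:
    """Formula (5): n such that one of `a` treatments is not, by pure
    chance, among the `m` best in every one of the n repetitions."""
    return math.ceil(math.log(p_lim) / (math.log(m) - math.log(a)))

def min_individuals(p: float, m: int, p_lim: float) -> int:
    """Formulas (7)-(8): n such that a type of frequency p appears in at
    least m+1 individuals (normal approximation, 0.1 < p < 0.9)."""
    d = DELTA[p_lim]
    b = d * math.sqrt((1 - p) / p)
    c = m / p
    n = ((b + math.sqrt(b * b + 4 * c)) / 2) ** 2
    if 0.25 <= p <= 0.75:              # correction n' = 1/p in the mid-range
        n += 1 / p
    return math.ceil(n)

print(min_repetitions(a=5, m=1, p_lim=0.05))      # -> 2
print(min_individuals(p=0.25, m=0, p_lim=0.05))   # -> 13
```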
When we are only interested in having at least one individual, $m$ becomes equal to zero and the formula reduces to:

$$c = 0 \;(\text{for } a = 1), \qquad n = \left(\frac{b + \sqrt{b^2}}{2}\right)^2 = b^2 = \delta^2\,\frac{1-p}{p}, \qquad n' = \frac{1}{p}, \qquad n_{cor} = n + n' \quad (9)$$

2) If $p$ is smaller than 0.1 we may use Table 1 in order to find the mean $m$ of a Poisson distribution and determine $n = m/p$. C) What is the minimum number of individuals necessary for distinguishing two frequencies $p_1$ and $p_2$? 1) When $p_1$ and $p_2$ are values between 0.1 and 0.9 we have:

$$n = \left[\frac{\delta\left(\sqrt{p_1(1-p_1)} + \sqrt{p_2(1-p_2)}\right)}{p_1 - p_2}\right]^2, \qquad n' = \frac{1}{p_1 - p_2}, \qquad n_{cor} = n + n' \quad (13)$$

We have again to use the one-sided limits of the Gaussian distribution. The correction $n'$ should be used if at least one of the values $p_1$ or $p_2$ lies between 0.25 and 0.75. A more complicated formula may be used in cases where we want to increase the precision:

$$b = \frac{\delta\left(\sqrt{p_1(1-p_1)} + \sqrt{p_2(1-p_2)}\right)}{p_1 - p_2}, \qquad c = \frac{m}{p_1 - p_2}, \qquad n = \left(\frac{b + \sqrt{b^2 + 4c}}{2}\right)^2, \qquad n' = \frac{1}{p_1 - p_2} \quad (14)$$

2) When both $p_1$ and $p_2$ are smaller than 0.1 we determine the quotient $p_1 : p_2$ and procure the corresponding number $m_2$ of a Poisson distribution in Table 2. The value of $n$ is found by the equation:

$$n = m_2 / p_2 \quad (15)$$

D) What is the minimum number necessary for distinguishing three or more frequencies $p_1 > p_2 > p_3$? 1) If the frequencies $p_1$, $p_2$, $p_3$ are values between 0.1 and 0.9 we have to solve the individual equations and use the highest value of $n$ thus determined:

$$n_{1,2} = \left[\frac{\delta\left(\sqrt{p_1(1-p_1)} + \sqrt{p_2(1-p_2)}\right)}{p_1 - p_2}\right]^2, \qquad n_{2,3} = \left[\frac{\delta\left(\sqrt{p_2(1-p_2)} + \sqrt{p_3(1-p_3)}\right)}{p_2 - p_3}\right]^2 \quad (16)$$

$\delta$ now represents the two-sided limits of the Gaussian distribution: 1.96, 2.58 and 3.29. 2) No table was prepared for the relatively rare cases of a comparison of three or more frequencies below 0.1; in such cases extremely high numbers would be required. E) A process is given which serves to solve two problems of an informatory nature: a) if a special type appears among $n$ individuals with a frequency $p_{obs}$, what may be the corresponding ideal value of $p_{exp}$; or b) if we study samples of $n$ individuals and expect a certain type with a frequency $p_{exp}$, what may be the extreme limits of $p_{obs}$ in individual families? 1) If we are dealing with values between 0.1 and 0.9 we may use Table 3. To solve the first question we select the respective horizontal line for $p_{obs}$, determine which column corresponds to our value of $n$, and find the respective value of $p_{exp}$ by interpolating between columns. In order to solve the second problem we start with the respective column for $p_{exp}$ and find the horizontal line for the given value of $n$, either directly or by approximation and interpolation. 2) For frequencies smaller than 0.1 we have to use Table 4 and transform the fractions $p_{exp}$ and $p_{obs}$ into numbers of the Poisson series by multiplication with $n$. In order to solve the first problem, we verify in which line the lower Poisson limit is equal to $m_{obs}$ and transform the corresponding value of $m$ into the frequency $p_{exp}$ by dividing by $n$. The observed frequency may thus be a chance deviate of any value between 0.0... and the value given by dividing the tabulated $m$ by $n$. In the second case we first transform the expectation $p_{exp}$ into a value of $m$ and procure, in the horizontal line corresponding to $m_{exp}$, the extreme values of $m$, which must then be transformed, by dividing by $n$, into values of $p_{obs}$.
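Formula (13) likewise admits a direct transcription. A sketch under the same reading (the 3:1 versus 9:7 segregation example is my own illustration):

```python
import math

def min_to_distinguish(p1: float, p2: float, delta: float = 1.64) -> int:
    """Formula (13): n separating frequencies p1 > p2 (both in 0.1..0.9),
    normal approximation with one-sided limit `delta`."""
    n = (delta * (math.sqrt(p1 * (1 - p1)) + math.sqrt(p2 * (1 - p2)))
         / (p1 - p2)) ** 2
    if 0.25 <= p1 <= 0.75 or 0.25 <= p2 <= 0.75:
        n += 1 / (p1 - p2)             # correction n'
    return math.ceil(n)

# Example: tell a 3:1 from a 9:7 segregation (p = 0.75 vs 0.5625)
print(min_to_distinguish(0.75, 0.5625))   # -> 72 individuals
```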
F) Partial and progressive tests may be recommended in all cases where there is a lack of material, or where the loss of time is less important than the cost of large-scale experiments, since in many cases the minimum number necessary to guarantee the results within the limits of precision is rather large. One should not forget that the minimum number really represents at the same time a maximum number, necessary only if one takes into consideration essentially the unfavorable variations; smaller numbers may frequently already give satisfactory results. For instance, by definition we know that a frequency of $p$ means that we expect one individual in every total of $1/p$. If there were no chance variations, this number ($1/p$) would be sufficient, and if there were favorable variations a still smaller number might yield one individual of the desired type. Thus, trusting to luck, one may start the experiment with numbers smaller than the minimum calculated according to the formulas given above, and increase the total until the desired result is obtained; this may well be before the "minimum number" is reached. Some concrete examples of this partial or progressive procedure are given from our genetic experiments with maize.
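The progressive procedure is easy to simulate. A sketch, assuming a desired type of frequency p = 0.25 and the minimum number n = 13 computed above for the 5% limit of precision:

```python
import random

def progressive_trial(p: float, n_max: int, rng: random.Random):
    """Grow the sample one individual at a time; stop at the first
    individual of the desired type. Returns the sample size used,
    or None if n_max is reached without success."""
    for n in range(1, n_max + 1):
        if rng.random() < p:
            return n
    return None

rng = random.Random(1)
p, n_min = 0.25, 13
stops = [progressive_trial(p, n_min, rng) for _ in range(10_000)]
done = sorted(s for s in stops if s is not None)
print(f"{len(done) / len(stops):.1%} of trials succeed within n = {n_min}; "
      f"median stopping point: {done[len(done) // 2]} individuals")
```

At p = 0.25 roughly 98% of trials succeed within the computed minimum, and the median trial stops after only about 3 individuals, which is the point of the progressive approach.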
Abstract:
In this paper we investigate various algorithms for performing the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT), and proper techniques for maximizing FFT/IFFT execution speed, such as pipelining or parallel processing, and the use of memory structures with pre-computed values (look-up tables, LUT) or other dedicated hardware components (usually multipliers). Furthermore, we discuss the optimal hardware architectures that best apply to the various FFT/IFFT algorithms, along with their ability to exploit parallel processing with minimal data dependencies in the FFT/IFFT calculations. An interesting approach also considered in this paper is the application of the integrated processing-in-memory Intelligent RAM (IRAM) chip to high-speed FFT/IFFT computing. The results of the assessment study emphasize that the execution speed of the FFT/IFFT algorithms is tightly connected to the ability of the FFT/IFFT hardware to support the parallelism offered by the given algorithm. Therefore, we suggest that the basic Discrete Fourier Transform (DFT)/Inverse Discrete Fourier Transform (IDFT) can also provide high performance, by utilizing a specialized FFT/IFFT hardware architecture that can exploit the parallelism of the DFT/IDFT operations. The proposed improvements include simplified multiplications over symbols given in a polar coordinate system, using sine and cosine look-up tables, and an approach for performing parallel addition of N input symbols.
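As a concrete illustration of the LUT technique named above, here is a minimal sketch of a radix-2 decimation-in-time FFT that reads its twiddle factors from a pre-computed sine/cosine table rather than recomputing them (an illustrative software model, not one of the surveyed hardware architectures):

```python
import math

def make_twiddle_lut(n: int) -> list[complex]:
    """Precompute e^(-2*pi*i*k/n) for k = 0..n/2-1 (cos/sin look-up table)."""
    return [complex(math.cos(-2 * math.pi * k / n),
                    math.sin(-2 * math.pi * k / n)) for k in range(n // 2)]

def fft(x: list[complex], lut: list[complex], stride: int = 1) -> list[complex]:
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of two.
    `stride` selects every stride-th LUT entry at the current recursion depth."""
    n = len(x)
    if n == 1:
        return x[:]
    even = fft(x[0::2], lut, stride * 2)
    odd = fft(x[1::2], lut, stride * 2)
    out = [0j] * n
    for k in range(n // 2):
        t = lut[k * stride] * odd[k]          # twiddle factor from the table
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

sig = [complex(i % 2) for i in range(8)]      # simple alternating test signal
lut = make_twiddle_lut(len(sig))
print([round(abs(v), 3) for v in fft(sig, lut)])   # energy at bins 0 and 4
```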
Abstract:
This work is dedicated to the comparison of open-source and proprietary transport protocols for high-speed data transmission over IP networks. The common TCP needs significant improvement, since it was developed as a general-purpose transport protocol and was first introduced four decades ago; in modern networks it no longer fits all communication needs. Because of this, other transport protocols have been developed and successfully used, e.g., for Big Data movement. Within the scope of this research, the following protocols were investigated for their efficiency on 10 Gbps links: UDT, RBUDP, MTP and RWTP. The protocols were tested under different impairments, such as round-trip times up to 400 ms and packet losses up to 2%. The investigated parameters were the data rate under different network conditions, the CPU load on sender and receiver during the experiments, the size of the feedback data, the CPU usage per Gbps, and the amount of feedback data per GiByte of effectively transmitted data. The best performance and fairest resource consumption were observed with RWTP; among the open-source projects, the best behavior was shown by RBUDP.
Abstract:
Wireless mesh networks present an attractive communication solution for various research and industrial projects. However, in many cases the appropriate preliminary calculations that allow predicting the network behavior have to be made before the actual deployment. For such purposes, network simulation environments emulating real network operation are often used. Within this paper, a behavior comparison of a real wireless mesh network (based on the 802.11s amendment) and a simulated one has been performed. The main objective of this work is to measure the performance parameters of a real 802.11s wireless mesh network (average UDP throughput and average one-way delay) and compare the derived results with the characteristics of a simulated wireless mesh network created with the NS-3 network simulation tool. The results, derived from the simulation model and the real-world test-bed, show that the behavior of both networks is similar. This confirms that the NS-3 simulation model is accurate and can be used in further research studies.
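The comparison itself boils down to averaging per-run measurements from the test-bed and from NS-3 and checking their deviation. A trivial sketch with placeholder numbers (not the paper's data):

```python
# Placeholder per-run averages; real values would come from the test-bed
# logs and from NS-3 trace/monitoring output.
testbed_throughput_mbps = [18.2, 17.9, 18.5, 18.1]
ns3_throughput_mbps     = [18.6, 18.4, 18.7, 18.3]
testbed_delay_ms        = [4.1, 4.3, 4.0, 4.2]
ns3_delay_ms            = [3.9, 4.0, 3.8, 4.1]

def relative_deviation(measured: list[float], simulated: list[float]) -> float:
    """|mean(simulated) - mean(measured)| relative to the measured mean."""
    m = sum(measured) / len(measured)
    s = sum(simulated) / len(simulated)
    return abs(s - m) / m

print(f"throughput deviation: "
      f"{relative_deviation(testbed_throughput_mbps, ns3_throughput_mbps):.1%}")
print(f"one-way delay deviation: "
      f"{relative_deviation(testbed_delay_ms, ns3_delay_ms):.1%}")
```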
Abstract:
Three isolated populations of the cricket Endecous itatibensis Rehn, 1918 were analyzed to test the hypothesis of divergence in the calling-song stridulation and to discuss its implications for the speciation process. The song recordings were obtained from specimens in the cities of Corumbataí, Piracicaba and Itatiba, state of São Paulo, Brazil. In the three populations analyzed, calling songs are composed of pairs of notes. Specimens from Corumbataí emit phrases composed of 3 to 18 pairs of notes, while those from Piracicaba and Itatiba use 1 to 3 pairs of notes. Inter- and intrapopulational variability in the frequencies of the acoustic signals was detected. While recording the calling songs, other stridulations were registered, possibly from other species of Orthoptera or Hemiptera. When the tapes were analyzed in the laboratory, it was verified that there was no overlap among the species' calling songs: each species had its own particular frequency band, avoiding interspecific acoustic competition.
Abstract:
This paper investigates the selection of governance forms in interfirm collaborations, taking into account the predictions of transaction-cost and property-rights theories. Transaction-cost arguments are often used to justify the introduction of hierarchical controls in collaborations, but the ownership dimension of going from "contracts" to "hierarchies" has been ignored in the past, and with it the so-called "costs of ownership". The theoretical results, tested with a sample of collaborations in which Spanish firms participate, indicate that the costs of ownership may offset the benefits of hierarchical controls and therefore limit their diffusion. Evidence is also reported of possible complementarities between reputation effects and the forms of ownership that go together with hierarchical controls (i.e., joint ventures), in contrast with the generally assumed substitutability between the two.
Abstract:
The Bank of Spain uses a unique auction format to sell government bonds, which can be seen as a hybrid of a uniform and a discriminatory auction. For winning bids above the average winning bid, buyers are charged the average winning bid; otherwise they pay their respective bids. We report on an experiment that compares this auction format to the discriminatory format, used in most other countries, and to the uniform format. Our design is based on a common-value model with multi-unit supply and two-unit demand. The results show significantly higher revenue with the Spanish and uniform formats than with the discriminatory one, while the volatility of prices over time is significantly lower in the discriminatory format than in the Spanish and uniform cases. Actual price dispersion is significantly larger in the discriminatory format than in the Spanish one. Our data also exhibit the use of bid-spreading strategies in all three designs.
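The hybrid payment rule described above is simple enough to state in code. A minimal sketch, assuming unit bids and a given set of winning bids (the names and numbers are illustrative, not taken from the experiment):

```python
def spanish_payments(winning_bids: list[float]) -> list[float]:
    """Bank of Spain rule: winners above the average winning bid pay the
    average; winners at or below it pay their own bids."""
    avg = sum(winning_bids) / len(winning_bids)
    return [min(b, avg) for b in winning_bids]

def discriminatory_payments(winning_bids: list[float]) -> list[float]:
    return list(winning_bids)               # every winner pays own bid

def uniform_payments(winning_bids: list[float], stop_out: float) -> list[float]:
    return [stop_out] * len(winning_bids)   # all pay the market-clearing price

bids = [10.2, 10.0, 9.8, 9.5]               # average winning bid = 9.875
print(spanish_payments(bids))               # [9.875, 9.875, 9.8, 9.5]
print(discriminatory_payments(bids))        # [10.2, 10.0, 9.8, 9.5]
print(uniform_payments(bids, 9.5))          # [9.5, 9.5, 9.5, 9.5]
```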
Abstract:
We study the relation between the number of firms and price-cost margins under price competition with uncertainty about competitors' costs. We present the results of an experiment in which two, three and four identical firms repeatedly interact in this environment. In line with the theoretical prediction, market prices decrease with the number of firms, but on average stay above marginal costs. Pricing is less aggressive in duopolies than in triopolies and tetrapolies. However, independently of the number of firms, pricing is more aggressive than in the theoretical equilibrium. Both the absolute and the relative surpluses increase with the number of firms. Total surplus is close to the equilibrium level, since the enhanced consumer surplus from lower prices is counteracted by occasional displacements of the most efficient firm in production.