976 results for parallel admission algorithm


Relevance:

20.00%

Publisher:

Abstract:

PURPOSE: To evaluate the efficacy of a systematic model of care for patients with chest pain and no ST segment elevation in the emergency room. METHODS: Of 1003 patients submitted to a diagnostic algorithm stratified by probability of acute ischemic syndrome, we analyzed 600 with no ST segment elevation, enrolled in the diagnostic routes for medium (route 2) and low (route 3) probability of ischemic syndrome. RESULTS: In route 2 we found 17% acute myocardial infarction and 43% unstable angina, whereas in route 3 the rates were 2% and 7%, respectively. Patients with a normal/non-specific ECG had a 6% probability of AMI, and those with a negative first CKMB had 7%; combining the two findings reduced it only to 4%. In route 2 the diagnosis of AMI could be ruled out only with serial CKMB measurement up to 9 hours, while in route 3 it could be ruled out within 3 hours. Accordingly, the sensitivity and negative predictive value of admission CKMB for AMI were 52% and 93%, respectively. About one half of patients with unstable angina showed no objective ischemic changes on admission. CONCLUSION: The use of a systematic model of care for patients with chest pain offers the opportunity to prevent inappropriate release of patients with ACI and reduces unnecessary admissions. However, some patients, even with a normal ECG, should not be released on the basis of a single negative CKMB. Serial CKMB measurement up to 9 hours is necessary in patients with medium probability of AMI.
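
The routing logic described above (probability-based diagnostic routes with route-specific serial CKMB windows) can be sketched in code. The sketch below is purely illustrative: the Patient fields, the probability labels, and the CKMB cutoff are placeholders of our own, not the study's actual criteria; only the route numbering and the 9-hour/3-hour observation windows come from the abstract.

    # Illustrative sketch, not the study's protocol: route assignment and
    # serial CKMB rule-out for chest-pain patients without ST elevation.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Patient:
        st_elevation: bool
        probability: str          # "high", "medium", or "low" (assumed labels)
        ckmb_series: List[float]  # serial CKMB values, one sample per interval

    CKMB_UPPER_LIMIT = 5.0        # assumed assay cutoff; varies by laboratory

    def assign_route(p: Patient) -> int:
        """Route 1 = high probability, route 2 = medium, route 3 = low."""
        if p.st_elevation or p.probability == "high":
            return 1
        return 2 if p.probability == "medium" else 3

    def rule_out_ami(p: Patient, hours_between_samples: float = 3.0) -> bool:
        """AMI is ruled out only after serial negative CKMB over the route's
        window: up to 9 hours in route 2, up to 3 hours in route 3."""
        window_h = 9.0 if assign_route(p) == 2 else 3.0
        needed = int(window_h / hours_between_samples) + 1  # incl. admission sample
        series = p.ckmb_series[:needed]
        return len(series) >= needed and all(v < CKMB_UPPER_LIMIT for v in series)

    patient = Patient(st_elevation=False, probability="medium",
                      ckmb_series=[2.1, 1.8, 2.4, 2.0])
    print(assign_route(patient), rule_out_ami(patient))  # 2 True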

Relevance:

20.00%

Publisher:

Abstract:

"A workshop within the 19th International Conference on Applications and Theory of Petri Nets - ICATPN’1998"

Relevance:

20.00%

Publisher:

Abstract:

Magdeburg, University, Faculty of Process and Systems Engineering, Dissertation, 2012

Relevance:

20.00%

Publisher:

Abstract:

Background: Vascular remodeling, the dynamic dimensional change of a vessel in the face of stress, can assume different directions as well as magnitudes in atherosclerotic disease. Classical measurements rely on reference segments at a distance, risking inappropriate comparison between dissimilar vessel portions. Objective: To explore a new method for quantifying vessel remodeling, based on the comparison between a given target segment and its inferred normal dimensions. Methods: Geometric parameters and plaque composition were determined in 67 patients using three-vessel intravascular ultrasound with virtual histology (IVUS-VH). Coronary vessel remodeling at the cross-section (n = 27,639) and lesion (n = 618) levels was assessed using classical metrics and a novel analytic algorithm based on the fractional vessel remodeling index (FVRI), which quantifies the total change in arterial wall dimensions relative to the estimated normal dimension of the vessel. A prediction model was built to estimate the normal dimension of the vessel for calculation of the FVRI. Results: According to the new algorithm, the "Ectatic" remodeling pattern was least common, "Complete compensatory" remodeling was present in approximately half of the instances, and "Negative" and "Incomplete compensatory" remodeling accounted for the remainder. Compared to a traditional diagnostic scheme, the FVRI-based classification seemed to better discriminate plaque composition by IVUS-VH. Conclusion: Quantitative assessment of coronary remodeling using target segment dimensions offers a promising approach to evaluating the vessel response to plaque growth or regression.
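
The definition of the FVRI given above lends itself to a compact formulation. The following sketch is our reading of that definition (measured wall dimension relative to the inferred normal dimension); the authors' exact equation and the prediction model for the normal dimension are not reproduced here.

    # Illustrative FVRI-style computation; variable names are ours, and the
    # regression model that predicts the normal vessel dimension is omitted.
    def fractional_vessel_remodeling_index(eem_area_mm2: float,
                                           predicted_normal_eem_area_mm2: float) -> float:
        """Relative deviation of the measured external elastic membrane (EEM)
        area from its predicted normal value at the same cross-section.
        Positive values suggest expansive remodeling, negative values shrinkage."""
        return (eem_area_mm2 - predicted_normal_eem_area_mm2) / predicted_normal_eem_area_mm2

    # A 15.0 mm^2 cross-section whose inferred normal size is 12.5 mm^2
    # yields an index of 0.20, i.e. 20% outward remodeling.
    print(fractional_vessel_remodeling_index(15.0, 12.5))  # 0.2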

Relevance:

20.00%

Publisher:

Abstract:

Background: B-type natriuretic peptide (BNP) has been extensively evaluated to determine short- and intermediate-term prognosis in patients with acute coronary syndrome, but its role in predicting long-term mortality is not known. Objective: To determine the very long-term prognostic role of BNP for all-cause mortality in patients with non-ST segment elevation acute coronary syndrome (NSTEACS). Methods: A cohort of 224 consecutive patients with NSTEACS, prospectively seen in the Emergency Department, had BNP measured on arrival to establish prognosis and underwent a median 9.34-year follow-up for all-cause mortality. Results: Unstable angina was diagnosed in 52.2% and non-ST segment elevation myocardial infarction in 47.8%. Median admission BNP was 81.9 pg/mL (interquartile range 22.2-225), and the mortality rate rose across increasing BNP quartiles: 14.3%, 16.1%, 48.2%, and 73.2% (p < 0.0001). ROC curve analysis identified 100 pg/mL as the best BNP cut-off value for mortality prediction (area under the curve = 0.789, 95% CI = 0.723-0.854), a strong predictor of late mortality: BNP < 100 = 17.3% vs. BNP ≥ 100 = 65.0%, RR = 3.76 (95% CI = 2.49-5.63, p < 0.001). On logistic regression analysis, age > 72 years (OR = 3.79, 95% CI = 1.62-8.86, p = 0.002), BNP ≥ 100 pg/mL (OR = 6.24, 95% CI = 2.95-13.23, p < 0.001), and estimated glomerular filtration rate (OR = 0.98, 95% CI = 0.97-0.99, p = 0.049) were independent predictors of late mortality. Conclusions: BNP measured at hospital admission in patients with NSTEACS is a strong, independent predictor of very long-term all-cause mortality. This finding supports the hypothesis that BNP should be measured in all patients with NSTEACS at the index event for long-term risk stratification.
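
The reported 100 pg/mL cut-off translates directly into a bedside-style stratification rule. The snippet below merely encodes that published cut-off and the observed mortality in each stratum; it is not the authors' logistic regression model.

    # Toy illustration of the reported BNP cut-off, not the study's model.
    BNP_CUTOFF_PG_ML = 100.0

    def long_term_risk_stratum(bnp_pg_ml: float) -> str:
        if bnp_pg_ml >= BNP_CUTOFF_PG_ML:
            return "high risk (observed long-term mortality 65.0%)"
        return "lower risk (observed long-term mortality 17.3%)"

    print(long_term_risk_stratum(81.9))  # the cohort's median admission BNP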

Relevance:

20.00%

Publisher:

Abstract:

Nowadays, considerable attention in academia and among research teams is directed to the potential of the 60 GHz frequency band in wireless communications. The 60 GHz band offers great possibilities for a wide variety of applications that are yet to be implemented, but these applications also pose major implementation challenges. One example is building a high data rate transceiver that at the same time has very low power consumption. In this paper we present a prototype of a Single Carrier (SC) transceiver system, giving a brief overview of the baseband design and emphasizing the most important design decisions. A brief overview of the possible approaches to implementing the equalizer, the most complex module in the SC transceiver, is also presented. The main focus of this paper is to propose a parallel architecture for the receiver in a Single Carrier communication system. This provides higher data rates than the communication system could otherwise achieve, at the price of higher power consumption. The suggested receiver architecture is illustrated in this paper, and the results of its implementation are given in comparison with the corresponding serial implementation.
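
The serial-versus-parallel trade-off described above can be sketched abstractly: split the received sample stream across parallel lanes and equalize the blocks concurrently. The code below is a generic software illustration of that idea, with a trivial one-tap equalizer standing in for the far more complex SC frequency-domain equalizer; it is not the authors' 60 GHz design.

    # Generic sketch: parallel receiver lanes raise throughput at extra
    # resource/power cost. Thread lanes model what hardware lanes do truly
    # in parallel; the one-tap equalizer is a placeholder.
    from concurrent.futures import ThreadPoolExecutor
    from typing import List

    def equalize_block(block: List[complex], channel_gain: complex) -> List[complex]:
        return [sample / channel_gain for sample in block]

    def parallel_receive(samples: List[complex], channel_gain: complex,
                         lanes: int = 4) -> List[complex]:
        size = -(-len(samples) // lanes)  # ceiling division
        blocks = [samples[i:i + size] for i in range(0, len(samples), size)]
        with ThreadPoolExecutor(max_workers=lanes) as pool:
            results = pool.map(equalize_block, blocks, [channel_gain] * len(blocks))
        return [symbol for block in results for symbol in block]

    print(parallel_receive([1 + 1j] * 8, 2 + 0j))  # eight (0.5+0.5j) symbols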

Relevance:

20.00%

Publisher:

Abstract:

In this paper we investigate various algorithms for performing the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT), and appropriate techniques for maximizing FFT/IFFT execution speed, such as pipelining or parallel processing and the use of memory structures with pre-computed values (look-up tables, LUTs) or other dedicated hardware components (usually multipliers). Furthermore, we discuss the hardware architectures that best suit the various FFT/IFFT algorithms, along with their ability to exploit parallel processing with minimal data dependences in the FFT/IFFT calculations. An interesting approach also considered in this paper is the application of the integrated processing-in-memory Intelligent RAM (IRAM) chip to high-speed FFT/IFFT computing. The results of the assessment study emphasize that the execution speed of an FFT/IFFT algorithm is tightly connected to the ability of the FFT/IFFT hardware to support the parallelism provided by the given algorithm. Therefore, we suggest that the basic Discrete Fourier Transform (DFT)/Inverse Discrete Fourier Transform (IDFT) can also deliver high performance when a specialized FFT/IFFT hardware architecture exploits the parallelism inherent in the DFT/IDFT operations. The proposed improvements include simplified multiplication of symbols given in a polar coordinate system, the use of sine and cosine look-up tables, and an approach for performing parallel addition of N input symbols.
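
The look-up-table simplification mentioned above is easy to make concrete. The sketch below implements a plain DFT whose twiddle factors come from precomputed cosine/sine tables; note that the N multiply-accumulate terms of each output bin are independent, which is exactly what makes the parallel addition of N symbols attractive in hardware. This is a generic illustration, not the authors' architecture.

    # DFT with precomputed cos/sin look-up tables (LUTs). X[k] is a sum of
    # independent terms x[m] * (cos - j*sin), so the N additions per bin
    # can be performed in parallel in hardware.
    import math
    from typing import List, Tuple

    def build_twiddle_lut(n: int) -> Tuple[List[float], List[float]]:
        cos_lut = [math.cos(2 * math.pi * k / n) for k in range(n)]
        sin_lut = [math.sin(2 * math.pi * k / n) for k in range(n)]
        return cos_lut, sin_lut

    def dft_with_lut(x: List[complex], cos_lut: List[float],
                     sin_lut: List[float]) -> List[complex]:
        n = len(x)
        out = []
        for k in range(n):
            acc = 0j
            for m in range(n):
                idx = (k * m) % n  # table index replaces a cos/sin evaluation
                acc += x[m] * complex(cos_lut[idx], -sin_lut[idx])
            out.append(acc)
        return out

    cos_lut, sin_lut = build_twiddle_lut(8)
    print(dft_with_lut([1, 0, 0, 0, 0, 0, 0, 0], cos_lut, sin_lut))  # impulse -> flat spectrum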

Relevance:

20.00%

Publisher:

Abstract:

Magdeburg, University, Faculty of Natural Sciences, Dissertation, 2014

Relevance:

20.00%

Publisher:

Abstract:

Advances in computer memory technology justify research into new and different views on computer organization. This paper proposes a novel memory-centric computing architecture that merges memory and processing elements in order to provide better conditions for parallelization and performance. The paper introduces the architectural concepts and then presents the design and implementation of a corresponding assembler and simulator.
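
The flavor of such a merged memory/processing design can be conveyed with a toy simulator in which code and data share one memory array and instructions address memory cells directly. The three-operand instruction set below is invented for illustration; the paper's own architecture, assembler, and simulator are of course more elaborate.

    # Toy memory-centric simulator: no register file, instructions operate
    # directly on memory cells. Each instruction occupies 4 cells:
    # opcode, source address 1, source address 2, destination address.
    def run(memory, pc=0):
        while True:
            op, a, b, dst = memory[pc:pc + 4]
            if op == 0:                              # HALT
                return memory
            elif op == 1:                            # ADD: mem[dst] = mem[a] + mem[b]
                memory[dst] = memory[a] + memory[b]
            elif op == 2:                            # MUL: mem[dst] = mem[a] * mem[b]
                memory[dst] = memory[a] * memory[b]
            pc += 4

    # Program: mem[102] = mem[100] + mem[101]; then halt.
    mem = [1, 100, 101, 102, 0, 0, 0, 0] + [0] * 96
    mem[100], mem[101] = 2, 3
    print(run(mem)[102])  # 5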