994 results for Correlation algorithm
Abstract:
This article presents Monte Carlo techniques for estimating network reliability. For highly reliable networks, techniques based on graph evolution models perform very well, but they are known to incur significant simulation cost. An existing hybrid scheme (based on partitioning the time space) can speed up the simulations; however, optimizing the important parameter associated with this scheme is difficult. To overcome these difficulties, a new hybrid scheme (based on partitioning the edge set) is proposed in this article. The proposed scheme shows orders-of-magnitude performance improvements over the existing techniques in certain classes of networks. It also provides reliability bounds with little overhead.
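For orientation, the sketch below shows crude Monte Carlo estimation of two-terminal network unreliability, i.e. the baseline that graph-evolution and hybrid schemes improve upon; it is not the edge-set-partitioning scheme proposed in the article, and the edge list, failure probability and terminal pair are illustrative.

```python
import random
from collections import defaultdict

def connected(edges, s, t):
    """Iterative depth-first search to test whether s and t are connected."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

def crude_mc_unreliability(edges, p_fail, s, t, samples=100_000):
    """Estimate P(s and t disconnected) when each edge fails independently."""
    failures = 0
    for _ in range(samples):
        up = [e for e in edges if random.random() >= p_fail]
        if not connected(up, s, t):
            failures += 1
    return failures / samples

# Illustrative example: 4-node ring, edge failure probability 0.05
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(crude_mc_unreliability(ring, 0.05, s=0, t=2))
```

For highly reliable networks the crude estimator above needs enormous sample sizes to observe any failure at all, which is exactly the variance problem that evolution-model and hybrid schemes address.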
Abstract:
A combined Genetic Algorithm and Method of Moments design method is presented for the design of unusual near-field antennas for use in Magnetic Resonance Imaging systems. The method is successfully applied to the design of an asymmetric coil structure for use at 190 MHz and demonstrates excellent radiofrequency field homogeneity.
Abstract:
B-type natriuretic peptide (BNP) levels increase in systolic heart failure (HF). However, the value of BNP in hypertensive patients with suspected diastolic HF (symptoms suggestive of HF but normal ejection fraction) and its relation to myocardial function in these patients are unclear. We prospectively studied 72 ambulatory hypertensive subjects (40 women, mean age 58 +/- 8 years) with exertional dyspnea and ejection fraction greater than or equal to 50%. Diastolic function was evaluated with transmitral and pulmonary venous Doppler, mitral annular velocities (pulsed-wave tissue Doppler), and flow propagation velocity (color M-mode). Systolic function was assessed with strain and strain rate derived from color tissue Doppler imaging. BNP was related to myocardial function and the presence or absence of global diastolic dysfunction. By conventional Doppler criteria, 34 patients had normal left ventricular diastolic function and 38 had isolated diastolic dysfunction. BNP values were higher in patients with diastolic dysfunction (46 +/- 48 vs 20 +/- 20 pg/ml, p = 0.004) and were related independently to blood pressure, systolic strain rate, left atrial function (p < 0.01 for all), and age (p = 0.015). Patients with diastolic dysfunction and pseudonormal filling had higher BNP levels than those with impaired relaxation (89 +/- 47 vs 35 +/- 42 pg/ml, p = 0.001). However, 79% of patients with diastolic dysfunction had BNP levels within the normal range. We conclude that in ambulatory hypertensive patients with symptoms suggestive of mild HF and normal ejection fraction, BNP is related to atrial and ventricular systolic parameters, blood pressure, and age. Although elevated in the presence of diastolic dysfunction, the BNP level is mostly in the normal range and therefore has limited diagnostic value in stable patients with suspected diastolic HF. (C) 2003 by Excerpta Medica, Inc.
Abstract:
Background and aims: Hip fracture is a devastating event in terms of outcome in the elderly, and the best predictor of hip fracture risk is hip bone density, usually measured by dual X-ray absorptiometry (DXA). However, bone density can also be ascertained from computerized tomography (CT) scans, and mid-thigh scans are frequently employed to assess the muscle and fat composition of the lower limb. Therefore, we examined whether it was possible to predict hip bone density using mid-femoral bone density. Methods: Subjects were 803 ambulatory white and black women and men, aged 70-79 years, participating in the Health, Aging and Body Composition (Health ABC) Study. Bone mineral content (BMC, g) and volumetric bone mineral density (vBMD, mg/cm³) of the mid-femur were obtained by CT, whereas BMC and areal bone mineral density (aBMD, g/cm²) of the hip (femoral neck and trochanter) were derived from DXA. Results: In regression analyses stratified by race and sex, the coefficient of determination was low, with mid-femoral BMC explaining 6-27% of the variance in hip BMC and a standard error of estimate (SEE) ranging from 16 to 22% of the mean. For mid-femur vBMD, the variance explained in hip aBMD was 2-17%, with a SEE ranging from 15 to 18%. Adjusting aBMD to approximate volumetric density did not improve the relationships. In addition, the utility of fracture prediction was examined. Forty-eight subjects had one or more fractures (various sites) during a mean follow-up of 4.07 years. In logistic regression analysis, there was no association between mid-femoral vBMD and fracture (all fractures), whereas a 1 SD increase in hip BMD was associated with reduced odds of fracture of approximately 60%. Conclusions: These results do not support the use of CT-derived mid-femoral vBMD or BMC to predict DXA-measured hip bone mineral status, irrespective of race or sex in older adults. Further, in contrast to femoral neck and trochanter BMD, mid-femur vBMD was not able to predict fracture (all fractures). (C) 2003, Editrice Kurtis.
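As a side note on the reported effect size, converting a logistic-regression coefficient into an odds ratio per 1 SD of the predictor is a one-line calculation; the coefficient and SD below are purely illustrative and are not taken from the Health ABC data.

```python
import numpy as np

def odds_ratio_per_sd(beta_per_unit, sd):
    """Odds ratio per 1 SD increase of a predictor, given the
    logistic-regression coefficient expressed per measurement unit."""
    return float(np.exp(beta_per_unit * sd))

# Hypothetical numbers: a coefficient of -0.9 per g/cm^2 with SD = 1.0 g/cm^2
# gives an OR of about 0.41, i.e. roughly 60% lower odds per SD increase.
print(odds_ratio_per_sd(-0.9, 1.0))
```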
Abstract:
Many organisations need to extract useful information from huge amounts of movement data. One example is found in maritime transportation, where the automated identification of a diverse range of traffic routes is a key management issue for improving the maintenance of ports and ocean routes, and accelerating ship traffic. This paper addresses, in a first stage, the research challenge of developing an approach for the automated identification of traffic routes based on clustering motion vectors rather than reconstructed trajectories. The immediate benefit of the proposed approach is to avoid reconstructing trajectories in terms of the geometric shape of their paths, their position in space, their life span, and changes in speed, direction and other attributes over time. For clustering the moving objects, an adapted version of the Shared Nearest Neighbour algorithm is used. The motion vectors, each with a position and a direction, are analysed in order to identify clusters of vectors moving in the same direction. These clusters represent traffic routes, and preliminary results are promising for the automated identification of traffic routes with different shapes and densities, as well as for handling noisy data.
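A minimal sketch of Shared Nearest Neighbour-style clustering applied to motion vectors (position plus heading) is given below; it is not the adapted algorithm used in the paper, and the neighbourhood size k, the shared-neighbour threshold and the direction weight are illustrative parameters.

```python
import numpy as np

def snn_clusters(vectors, k=5, min_shared=3, w_dir=1.0):
    """Shared Nearest Neighbour clustering of motion vectors.

    vectors: array of shape (n, 3) with columns (x, y, heading_radians).
    Two vectors end up in the same cluster when they appear in each other's
    k-nearest-neighbour lists and share at least `min_shared` neighbours.
    """
    xy, theta = vectors[:, :2], vectors[:, 2]
    n = len(vectors)
    # Combined distance: Euclidean position plus weighted angular difference
    d_pos = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    d_ang = np.abs(np.angle(np.exp(1j * (theta[:, None] - theta[None, :]))))
    dist = d_pos + w_dir * d_ang
    knn = [set(np.argsort(dist[i])[1:k + 1]) for i in range(n)]

    # Union-find over shared-nearest-neighbour links
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in knn[i]:
            if i in knn[j] and len(knn[i] & knn[j]) >= min_shared:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# Illustrative example: two traffic lanes heading in opposite directions
rng = np.random.default_rng(0)
lane1 = np.column_stack([rng.uniform(0, 10, 30), rng.normal(0.0, 0.2, 30),
                         rng.normal(0.0, 0.05, 30)])
lane2 = np.column_stack([rng.uniform(0, 10, 30), rng.normal(3.0, 0.2, 30),
                         rng.normal(np.pi, 0.05, 30)])
labels = snn_clusters(np.vstack([lane1, lane2]))
```

Because the similarity is defined on shared neighbours rather than raw distances, this style of clustering tolerates routes of different shapes and densities, which is the property the abstract relies on.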
Abstract:
As with any variety of rice, red rice characteristics, including nutritional and rheological properties, vary with genotype, growing conditions, and type of processing. This study determined the nutritional characteristics (centesimal composition and minerals) and paste viscosity properties of raw grains of four red rice genotypes (Tradicional, MNAPB0405, MNACE0501 and MNACH0501) and the paste viscosity properties of pre-gelatinized flours obtained at different cooking times (20, 30 and 40 min). The main nutritional properties were correlated with the pasting properties of the pre-gelatinized flours. The samples showed differences in nutritional properties and paste viscosity. MNAPB0405 and MNACE0501 showed higher levels of fiber and fat and provided higher caloric energy than Tradicional and MNACH0501, which, in turn, showed higher levels of amylose. MNACH0501 showed higher peak viscosity (2402 cP), higher breakdown viscosity (696 cP) and a greater tendency to retrogradation (1510 cP), while Tradicional, MNAPB0405 and MNACE0501 had pasting profiles with peak viscosities varying between 855 and 1093 cP, breakdown viscosity below 85 cP and retrogradation tendency between 376 and 1206 cP. The factors genotype and cooking time influenced the rheological behavior of the pre-gelatinized flours, decreasing their pasting properties. The protein and amylose levels are correlated with the pasting properties and can be used as indicators of these properties in different genotypes of red rice, whether raw or processed into pre-gelatinized flours.
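At its simplest, checking whether a compositional property tracks a pasting property is a correlation coefficient; a minimal sketch with purely illustrative values (not the measurements reported above) follows.

```python
import numpy as np

# Hypothetical values, one entry per sample: amylose content (%) and
# peak viscosity (cP); not the data reported in the abstract.
amylose = np.array([18.5, 21.0, 24.2, 27.8, 30.1])
peak_viscosity = np.array([2300.0, 1800.0, 1350.0, 1050.0, 900.0])

r = np.corrcoef(amylose, peak_viscosity)[0, 1]
print(f"Pearson r between amylose and peak viscosity: {r:.2f}")
```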
Abstract:
Hand and finger tracking is of major importance in healthcare, for the rehabilitation of hand function impaired by a neurological disorder, and in virtual environment applications, such as character animation for online games or movies. Current solutions consist mostly of motion tracking gloves with embedded resistive bend sensors that often suffer from signal drift, sensor saturation, sensor displacement and complex calibration procedures. More advanced solutions provide better tracking stability, but at the expense of a higher cost. The proposed solution aims to provide the required precision, stability and feasibility through the combination of eleven inertial measurement units (IMUs). Each unit captures the spatial orientation of the body segment to which it is attached. To fully capture the hand movement, each finger carries two units (at the proximal and distal phalanges), and one additional unit is placed at the back of the hand. The proposed glove was validated in two distinct steps: a) evaluation of the sensors' accuracy and stability over time; b) evaluation of the bending trajectories during usual finger flexion tasks based on the intra-class correlation coefficient (ICC). Results revealed that the glove was sensitive mainly to magnetic field distortions and sensor tuning. The inclusion of a hard- and soft-iron correction algorithm, together with accelerometer and gyroscope drift and temperature compensation, increased stability and precision. The evaluation of finger trajectories yielded high ICC values, with overall reliability within the application's acceptable limits. The developed low-cost system provides straightforward calibration and usability, qualifying the device for hand and finger tracking in the healthcare and animation industries.
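The abstract does not state which ICC form was used; a common choice for repeated measurements taken with the same device is ICC(3,1), sketched below from the standard two-way ANOVA mean squares. The example data are illustrative.

```python
import numpy as np

def icc_3_1(data):
    """ICC(3,1): two-way mixed model, consistency, single measurement.

    data: array of shape (n_targets, n_trials), e.g. joint-angle samples
    from repeated trials of the same finger flexion task.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Illustrative data: 5 trajectory samples measured in 3 repeated trials
trials = np.array([[10.1, 10.3,  9.9],
                   [35.2, 34.8, 35.5],
                   [61.0, 60.4, 61.2],
                   [82.3, 81.9, 82.8],
                   [95.0, 94.6, 95.3]])
print(icc_3_1(trials))
```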
Abstract:
Quantitative analysis of cine cardiac magnetic resonance (CMR) images for the assessment of global left ventricular morphology and function remains a routine task in clinical cardiology practice. To date, this process requires user interaction, which prolongs the examination (i.e. increases cost) and introduces observer variability. In this study, we sought to validate the feasibility, accuracy, and time efficiency of a novel framework for automatic quantification of left ventricular global function in a clinical setting.
Abstract:
Based on our recent discovery of closed-form formulae for efficient Mean-Variance retentions in variable quota-share proportional reinsurance under group correlation, we analyzed the influence of different combinations of correlation and safety loading levels on the efficient frontier, both in a single-period stylized problem and in a multiperiod one.
Abstract:
OBJECTIVE: It is an accepted fact that confinement conditions increase the risk of some infections related to sexual practices and/or injecting drug use. Mathematical techniques were applied to estimate time-dependent incidence densities of HIV infection among inmates. METHODS: A total of 631 prisoners from a Brazilian prison holding 4,900 inmates at the time were interviewed and had their blood drawn. Risky behavior for HIV infection was analyzed, and serological tests for HIV, hepatitis C and syphilis were performed, the latter two intended as surrogates for parenteral and sexual HIV transmission, respectively. RESULTS: Prevalences were: HIV, 16%; HCV, 34%; and syphilis, 18%. The main factors related to HIV infection were HCV seropositivity (OR = 10.49) and acknowledged use of injecting drugs (OR = 3.36). Incidence density ratio derivation showed that the risk of acquiring HIV infection increases with the time of imprisonment, peaking around three years after incarceration. CONCLUSIONS: The correlation between HIV and HCV seroprevalence and the results of the mathematical analysis suggest that HIV transmission in this population is predominantly due to parenteral exposure through injecting drug use, and that it increases with time of imprisonment.
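The abstract does not detail the mathematical technique used; one simple way to obtain a time-dependent incidence density from cross-sectional prevalence by time served is a catalytic-model calculation, sketched below with illustrative strata and synthetic data under a constant-hazard-within-stratum assumption. This is not necessarily the authors' derivation.

```python
import numpy as np

def incidence_density_by_time_served(years, infected,
                                     edges=(0, 1, 2, 3, 5, 10)):
    """Crude incidence density per stratum of time served.

    years:    years of imprisonment per inmate (cross-sectional sample)
    infected: boolean HIV serostatus per inmate

    Under a simple catalytic model the cumulative hazard at time t is
    H(t) = -ln(1 - prevalence(t)); the incidence density in a stratum is
    approximated by the rise in H across the stratum divided by its width.
    """
    years = np.asarray(years, dtype=float)
    infected = np.asarray(infected, dtype=bool)
    edges = np.asarray(edges, dtype=float)
    prevalence = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (years >= lo) & (years < hi)
        prevalence.append(infected[mask].mean() if mask.any() else np.nan)
    cum_hazard = np.concatenate([[0.0], -np.log(1.0 - np.array(prevalence))])
    densities = np.diff(cum_hazard) / np.diff(edges)
    return list(zip(edges[:-1], edges[1:], densities))

# Synthetic data only (631 inmates, risk rising over the first years)
rng = np.random.default_rng(1)
years = rng.uniform(0, 10, 631)
infected = rng.random(631) < 0.05 + 0.02 * np.minimum(years, 3)
for lo, hi, d in incidence_density_by_time_served(years, infected):
    print(f"{lo:.0f}-{hi:.0f} years served: {d:.3f} per person-year")
```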
Abstract:
5th European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS 2008); 8th World Congress on Computational Mechanics (WCCM8)
Abstract:
The advances made in channel-capacity codes, such as turbo codes and low-density parity-check (LDPC) codes, have played a major role in the emerging distributed source coding paradigm. LDPC codes can be easily adapted to new source coding strategies due to their natural representation as bipartite graphs and the use of quasi-optimal decoding algorithms, such as belief propagation. This paper tackles a relevant scenario in distributed video coding: lossy source coding when multiple side information (SI) hypotheses are available at the decoder, each one correlated with the source according to a different correlation noise channel. Thus, it is proposed to exploit multiple SI hypotheses through an efficient joint decoding technique with multiple LDPC syndrome decoders that exchange information to obtain coding efficiency improvements. At the decoder side, the multiple SI hypotheses are created with motion compensated frame interpolation and fused together in a novel iterative LDPC-based Slepian-Wolf decoding algorithm. With the creation of multiple SI hypotheses and the proposed decoding algorithm, bitrate savings of up to 8.0% are obtained for similar decoded quality.
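For background, Slepian-Wolf coding with LDPC syndromes can be illustrated by a hard-decision bit-flipping decoder that starts from a single SI hypothesis and flips bits until the received syndrome is satisfied. The sketch below uses a toy parity-check matrix and one SI hypothesis; the paper's contribution, by contrast, is the iterative belief-propagation-based joint decoding of multiple fused hypotheses.

```python
import numpy as np

def syndrome_bitflip_decode(H, syndrome, side_info, max_iter=50):
    """Hard-decision bit-flipping Slepian-Wolf decoder.

    H:         parity-check matrix (m x n), entries in {0, 1}
    syndrome:  m-bit syndrome of the source word, s = H @ x mod 2
    side_info: n-bit side-information hypothesis (decoding starts here)
    """
    x = side_info.copy()
    for _ in range(max_iter):
        residual = (H @ x + syndrome) % 2          # unsatisfied checks
        if not residual.any():
            return x                               # all checks satisfied
        votes = H.T @ residual                     # checks touching each bit
        x[np.argmax(votes)] ^= 1                   # flip the worst offender
    return x

# Toy parity-check matrix (illustrative, not a real LDPC code)
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])
x = np.array([1, 0, 1, 1, 0, 1])       # source word at the encoder
s = (H @ x) % 2                        # syndrome sent to the decoder
y = np.array([1, 1, 1, 1, 0, 1])       # side information, one bit in error
print(syndrome_bitflip_decode(H, s, y), x)
```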
Abstract:
This paper presents an algorithm to efficiently generate the state space of systems specified using the IOPT Petri-net modeling formalism. IOPT nets are a non-autonomous Petri-net class, based on Place-Transition nets with an extended set of features designed to allow the rapid prototyping and synthesis of system controllers through an existing hardware-software co-design framework. To obtain coherent and deterministic operation, IOPT nets use a maximal-step execution semantics where, in a single execution step, all enabled transitions fire simultaneously. This increases the resulting state-space complexity and can cause an arc "explosion" effect: real-world applications with several million states can have an order of magnitude more arcs, leading to the need for high-performance state-space generation algorithms. The proposed algorithm applies a compilation approach: it reads a PNML file containing one IOPT model and automatically generates an optimized C program to calculate the corresponding state space.
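For illustration, the sketch below generates the state space of an ordinary Place-Transition net under maximal-step semantics: in every step, a maximal set of simultaneously fireable transitions fires at once. It ignores IOPT's non-autonomous input/output features and the compiled-C optimization described in the paper, and it enumerates maximal steps by brute force, which is only viable for very small nets.

```python
from itertools import combinations
from collections import deque

def maximal_steps(marking, pre):
    """All maximal sets of transitions that can fire together
    (enough tokens in every shared input place)."""
    trans = list(pre)
    def fits(subset):
        need = {}
        for t in subset:
            for p, w in pre[t].items():
                need[p] = need.get(p, 0) + w
        return all(marking.get(p, 0) >= w for p, w in need.items())
    candidates = [set(c) for r in range(len(trans), 0, -1)
                  for c in combinations(trans, r) if fits(c)]
    return [s for s in candidates if not any(s < o for o in candidates)]

def state_space(m0, pre, post):
    """Breadth-first maximal-step state-space generation."""
    key = lambda m: tuple(sorted(m.items()))
    seen, frontier, arcs = {key(m0)}, deque([m0]), []
    while frontier:
        m = frontier.popleft()
        for step in maximal_steps(m, pre):
            m2 = dict(m)
            for t in step:
                for p, w in pre[t].items():
                    m2[p] -= w
                for p, w in post[t].items():
                    m2[p] = m2.get(p, 0) + w
            arcs.append((m, frozenset(step), m2))
            if key(m2) not in seen:
                seen.add(key(m2))
                frontier.append(m2)
    return seen, arcs

# Toy net: p1 --t1--> p2 --t2--> p3 (all places listed in the marking)
pre  = {'t1': {'p1': 1}, 't2': {'p2': 1}}
post = {'t1': {'p2': 1}, 't2': {'p3': 1}}
m0 = {'p1': 1, 'p2': 1, 'p3': 0}
states, arcs = state_space(m0, pre, post)
print(len(states), "states,", len(arcs), "arcs")
```

Because every reachable marking can branch into several maximal steps, the number of arcs can grow much faster than the number of states, which is the arc "explosion" effect mentioned above.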
Abstract:
Solubility measurements of quinizarin (1,4-dihydroxyanthraquinone), disperse red 9 (1-(methylamino)anthraquinone), and disperse blue 14 (1,4-bis(methylamino)anthraquinone) in supercritical carbon dioxide (SC CO2) were carried out in a flow-type apparatus, at temperatures from (333.2 to 393.2) K and pressures from (12.0 to 40.0) MPa. The mole fraction solubility of the three dyes decreases in the order quinizarin (2.9 × 10⁻⁶ to 2.9 × 10⁻⁴), red 9 (1.4 × 10⁻⁶ to 3.2 × 10⁻⁴), and blue 14 (7.8 × 10⁻⁸ to 2.2 × 10⁻⁵). Four semiempirical density-based models were used to correlate the solubility of the dyes in SC CO2. From the correlation results, the total heat of reaction (heat of vaporization plus heat of solvation of the solute) was calculated and compared with the results presented in the literature. The solubilities of the three dyes were also correlated by applying the Soave-Redlich-Kwong cubic equation of state (SRK CEoS) with classical mixing rules, and the physical properties required for the modeling were estimated and reported.
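The four density-based models are not named in the abstract; the Chrastil-type correlation below is one common choice and is used here purely as an illustration. In this form, ln y = k ln ρ + a/T + b, and the fitted parameter a is commonly related to the total heat of reaction through ΔH ≈ aR. The data points are illustrative, not the reported measurements.

```python
import numpy as np

def fit_density_model(T, rho, y):
    """Least-squares fit of a Chrastil-type density-based correlation:
    ln(y) = k * ln(rho) + a / T + b

    T:   temperature, K
    rho: CO2 density, kg/m^3
    y:   solute solubility (e.g. mole fraction)
    """
    A = np.column_stack([np.log(rho), 1.0 / np.asarray(T, float),
                         np.ones(len(T))])
    coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
    k, a, b = coef
    return k, a, b

# Illustrative data only (not the measurements from the abstract)
T   = np.array([333.2, 353.2, 373.2, 393.2])
rho = np.array([800.0, 700.0, 600.0, 500.0])
y   = np.array([3e-6, 8e-6, 2e-5, 5e-5])
k, a, b = fit_density_model(T, rho, y)
print(f"k = {k:.2f}, a = {a:.0f} K, b = {b:.2f}")
```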
Abstract:
In recent years, the use of several new resources in power systems, such as distributed generation, demand response and, more recently, electric vehicles, has increased significantly. Power systems aim at lowering operational costs, which requires adequate energy resource management. In this context, load consumption management plays an important role, and optimization strategies are needed to adjust consumption to the supply profile. These optimization strategies can be integrated into demand response programs. Controlling the energy consumption of an intelligent house aims at optimizing load consumption. This paper presents a genetic algorithm approach to manage the consumption of a residential house, making use of a SCADA system developed by the authors. Consumption management is done by reducing or curtailing loads to keep the power consumption at or below a specified energy consumption limit. This limit is determined according to the consumer strategy, taking into account the renewable-based microgeneration, energy price, supplier solicitations, and consumers' preferences. The proposed approach is compared with a mixed-integer non-linear approach.
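A minimal sketch of a genetic algorithm for the load-curtailment decision is given below: a binary genome keeps or curtails each load, and the fitness rewards keeping high-priority loads on while penalizing any excess over the consumption limit. The loads, limit, priority weights and GA parameters are illustrative assumptions, not the authors' SCADA-integrated formulation.

```python
import random

# Illustrative loads: (power_kW, priority weight; higher = more important)
LOADS = [(2.0, 5), (1.5, 3), (1.0, 4), (0.8, 1), (0.5, 2), (1.2, 3)]
LIMIT_KW = 4.0   # consumption limit set by the consumer strategy

def fitness(genome):
    """Reward kept priority, heavily penalize exceeding the power limit."""
    power = sum(p for (p, _), on in zip(LOADS, genome) if on)
    comfort = sum(w for (_, w), on in zip(LOADS, genome) if on)
    return comfort - 1000.0 * max(0.0, power - LIMIT_KW)

def genetic_algorithm(pop_size=40, generations=100, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in LOADS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                  # selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(LOADS))     # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([g ^ 1 if random.random() < p_mut else g
                             for g in child])         # bit-flip mutation
        pop = elite + children
    return max(pop, key=fitness)

best = genetic_algorithm()
kept_kw = sum(p for (p, _), on in zip(LOADS, best) if on)
print("loads kept on:", best, "| total consumption:", kept_kw, "kW")
```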