979 results for RM (rate monotonic) algorithm
Abstract:
The present study proposes a dynamic constitutive material interface model that includes a non-associated flow rule and high strain rate effects, implemented in the finite element code ABAQUS as a user subroutine. First, the model's capability is validated with numerical simulations of unreinforced blockwork masonry walls subjected to low velocity impact. The results obtained are compared with field test data and good agreement is found. Subsequently, a comprehensive parametric analysis is carried out with different joint tensile strengths, cohesion values, and wall thicknesses to evaluate the effect of the parameter variations on the impact response of masonry walls.
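As a rough illustration of the high strain rate effects such interface models account for, the sketch below applies a power-law dynamic increase factor to a static joint tensile strength; the functional form, reference strain rate, and exponent are hypothetical placeholders, not the parameters of the proposed model.

    # Hypothetical sketch of a strain-rate strength enhancement (dynamic
    # increase factor); the power-law form, reference rate, and exponent
    # are illustrative only, not the study's constitutive parameters.
    def dynamic_increase_factor(strain_rate, ref_rate=1e-6, alpha=0.02):
        """Scale a static strength for high strain rates (illustrative)."""
        if strain_rate <= ref_rate:
            return 1.0
        return (strain_rate / ref_rate) ** alpha

    static_tensile_strength = 0.2  # MPa, joint tensile strength (example value)
    dynamic_strength = static_tensile_strength * dynamic_increase_factor(1.0)
    print(f"dynamic tensile strength: {dynamic_strength:.3f} MPa")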
Abstract:
Pressures on the Brazilian Amazon forest have been accentuated by the agricultural activities of families encouraged to settle in this region in the 1970s by the government's colonization program. The aims of this study were to analyze the temporal and spatial evolution of land cover and land use (LCLU) in the lower Tapajós region, in the state of Pará. We contrast 11 watersheds that are generally representative of the colonization dynamics in the region. For this purpose, Landsat satellite images from three different years, 1986, 2001, and 2009, were analyzed with Geographic Information Systems. Individual images were subjected to an unsupervised classification using the Maximum Likelihood Classification algorithm available in GRASS. The classes retained for the representation of LCLU in this study were: (1) slightly altered old-growth forest, (2) succession forest, (3) crop land and pasture, and (4) bare soil. The analysis and observation of general trends in the 11 watersheds shows that LCLU is changing very rapidly. The average deforestation of old-growth forest across all the watersheds was estimated at more than 30% for the period from 1986 to 2009. The local-scale analysis of watersheds reveals the complexity of LCLU, notably in relation to large changes in the temporal and spatial evolution of watersheds. Proximity to the sprawling city of Itaituba is related to the highest rate of deforestation in two watersheds. The opening of roads such as the Transamazonian highway is associated with the second highest rate of deforestation in three watersheds.
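To make the post-classification change analysis concrete, here is a minimal Python sketch that tallies per-class area from two classified rasters and reports old-growth forest change between dates; the class codes, the 30 m Landsat pixel size, and the random stand-in rasters are assumptions for illustration, not the study's data.

    import numpy as np

    # Minimal sketch of a post-classification change summary: given two
    # classified rasters (integer class codes), compute per-class area and
    # the old-growth forest loss between dates. Codes and pixel size are
    # assumptions for illustration.
    CLASSES = {1: "old-growth forest", 2: "succession forest",
               3: "crop land and pasture", 4: "bare soil"}
    PIXEL_AREA_HA = 30 * 30 / 10_000  # one 30 m Landsat pixel in hectares

    def class_areas(raster):
        """Return a dict mapping class code -> area in hectares."""
        codes, counts = np.unique(raster, return_counts=True)
        return {int(c): n * PIXEL_AREA_HA for c, n in zip(codes, counts)}

    # Stand-in classified maps for 1986 and 2009 (random for the example).
    rng = np.random.default_rng(0)
    lclu_1986 = rng.integers(1, 5, size=(500, 500))
    lclu_2009 = rng.integers(1, 5, size=(500, 500))

    a86, a09 = class_areas(lclu_1986), class_areas(lclu_2009)
    change = (a09[1] - a86[1]) / a86[1] * 100
    print(f"old-growth forest change 1986-2009: {change:+.1f}%")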
Abstract:
The Amazon várzeas are an important component of the Amazon biome, but anthropic and climatic impacts have been leading to forest loss and interruption of essential ecosystem functions and services. The objectives of this study were to evaluate the capability of the Landsat-based Detection of Trends in Disturbance and Recovery (LandTrendr) algorithm to characterize changes in várzea forest cover in the Lower Amazon, and to analyze the potential of spectral and temporal attributes to classify forest loss as either natural or anthropogenic. We used a time series of 37 Landsat TM and ETM+ images acquired between 1984 and 2009. We used the LandTrendr algorithm to detect forest cover change and the attributes of "start year", "magnitude", and "duration" of the changes, as well as "NDVI at the end of series". Detection was restricted to areas identified as having forest cover at the start and/or end of the time series. We used the Support Vector Machine (SVM) algorithm to classify the extracted attributes, differentiating between anthropogenic and natural forest loss. Detection reliability was consistently high for change events along the Amazon River channel, but variable for changes within the floodplain. Spectral-temporal trajectories faithfully represented the nature of changes in floodplain forest cover, corroborating field observations. We estimated anthropogenic forest losses to be larger (1,071 ha) than natural losses (884 ha), with an overall classification accuracy of 94%. We conclude that the LandTrendr algorithm is a reliable tool for studies of forest dynamics throughout the floodplain.
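The attribute-based separation of natural and anthropogenic loss can be sketched with scikit-learn's SVM as below; the synthetic change events, the toy labelling rule, and the RBF-kernel settings are placeholders rather than the study's actual data or parameters.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Sketch of the attribute-based classification step: each detected
    # change event is described by the four LandTrendr attributes named
    # above and labelled natural (0) or anthropogenic (1). Synthetic data
    # and SVM settings are placeholders, not the study's parameters.
    rng = np.random.default_rng(42)
    n = 300
    X = np.column_stack([
        rng.integers(1984, 2010, n),   # start year of the change
        rng.uniform(0.1, 1.0, n),      # magnitude of the change
        rng.integers(1, 10, n),        # duration in years
        rng.uniform(0.2, 0.9, n),      # NDVI at the end of the series
    ])
    y = (X[:, 1] > 0.5).astype(int)    # toy labelling rule

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")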
Abstract:
Doctoral thesis in Chemical and Biological Engineering
Abstract:
PURPOSE: To assess the presence and prevalence of arrhythmias and the variability of the heart rate in the medium-term postoperative period following the maze procedure for chronic atrial fibrillation (AF). METHODS: Seventeen patients with a mean age of 51.7 ± 12.9 years, who had previously undergone the maze procedure without cryoablation for chronic atrial fibrillation, were evaluated with 24-hour electrocardiogram (ECG) Holter monitoring from the 6th month after the operation. Valvular and coronary procedures were concomitantly performed. RESULTS: The mean heart rate during Holter monitoring was 82 ± 8 bpm; the maximal heart rate was 126 ± 23 bpm and the minimal heart rate 57 ± 7 bpm. Sinus rhythm was found in 10 (59%) patients and atrial rhythm in 7 (41%). Supraventricular extrasystoles accounted for 2.3 ± 5.5% of the total number of heartbeats and occurred in 16 (94%) patients. Six (35%) patients showed nonsustained atrial tachycardia. Ventricular extrasystoles, accounting for 0.8 ± 0.5% of the total heartbeats, occurred in 14 (82%) patients. The chronotropic competence was normal in 9 (53%) patients and attenuated in 8 (47%). The atrioventricular (AV) conduction was unchanged in 13 (76%) patients and there were 4 (24%) cases of first degree atrioventricular block (AVB). CONCLUSION: After the maze procedure, the values for the mean heart rate, AV conduction and chronotropic competence approach the normal range, although some cases show attenuation of the chronotropic response, first degree AV block or benign arrhythmias.
Abstract:
Aims: To evaluate the differences in linear and complex heart rate dynamics in twin pairs according to fetal sex combination [male-female (MF), male-male (MM), and female-female (FF)]. Methods: Fourteen twin pairs (6 MF, 3 MM, and 5 FF) were monitored between 31 and 36.4 weeks of gestation. Twenty-six fetal heart rate (FHR) recordings of both twins were simultaneously acquired and analyzed with a system for computerized analysis of cardiotocograms. Linear and nonlinear FHR indices were calculated. Results: Overall, MM twins presented higher intrapair average in linear indices than the other pairs, whereas FF twins showed higher sympathetic-vagal balance. MF twins exhibited higher intrapair average in entropy indices and MM twins presented lower entropy values than FF twins considering the (automatically selected) threshold rLu. MM twin pairs showed higher intrapair differences in linear heart rate indices than MF and FF twins, whereas FF twins exhibited lower intrapair differences in entropy indices. Conclusions: The results of this exploratory study suggest that twins have sex-specific differences in linear and nonlinear indices of FHR. MM twins expressed signs of a more active autonomic nervous system and MF twins showed the most active complexity control system. These results suggest that fetal sex combination should be taken into consideration when performing detailed evaluation of the FHR in twins.
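As an illustration of the nonlinear indices involved, the sketch below computes sample entropy for a synthetic FHR trace; the embedding dimension m = 2 and the tolerance r = 0.2·SD stand in for the automatically selected threshold rLu and are conventional defaults, not the study's exact settings.

    import numpy as np

    # Minimal sample-entropy sketch for an FHR series; m and r are
    # conventional defaults, standing in for the study's rLu threshold.
    def sample_entropy(x, m=2, r=None):
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * x.std()
        def count_matches(mm):
            """Count template pairs of length mm within tolerance r."""
            templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            count = 0
            for i in range(len(templates)):
                dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                count += np.sum(dist < r)
            return count
        b, a = count_matches(m), count_matches(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    rng = np.random.default_rng(1)
    fhr = 140 + 5 * rng.standard_normal(600)  # synthetic FHR trace, bpm
    print(f"SampEn(m=2, r=0.2*SD) = {sample_entropy(fhr):.3f}")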
Abstract:
Fetal movements and fetal heart rate (FHR) are well-established markers of fetal well-being and maturation of the fetal central nervous system. The purpose of this paper is to review and discuss the available knowledge on fetal movements and heart rate patterns in twin pregnancies. There is some evidence for an association or similarity in fetal movement incidences or FHR patterns between both members of twin pairs. However, the temporal occurrence of these patterns seems to be for the most part asynchronous, especially when stricter criteria are used to define synchrony. The available data suggest that fetal behavior is largely independent of sex combination, fetal position, and presentation. Conversely, chorionicity appears to have some influence on fetal behavior, mainly before 30 weeks of gestation. There is preliminary evidence for the continuity of inter-individual differences in fetal activity and FHR patterns over pregnancy. Comparisons between studies are limited by large methodological differences and absence of uniform concepts and definitions. Future studies with high methodological quality are needed to provide a more comprehensive knowledge of normal fetal behavior in twin pregnancy.
Abstract:
The severe economic downturn that followed the Global Financial Crisis of 2007 was accompanied by major fluctuations in the labour market. During the Great Recession the rate of job destruction was such that, by 2013, the active population was at 1999 levels, employment was at a historical minimum, and the unemployment rate had soared to 17.5%. This chapter inspects the dynamics behind the aggregate fluctuations in the labour market and studies the determinants of mobility within firms (promotions) and between firms, and whether these changed during the crisis, using Portuguese linked employer-employee data (LEED). During the crisis women became more likely to make between-firm moves with short gaps of unemployment and less likely to find a new job after a long gap or to make a job-to-non-employment transition. More educated workers are less likely to experience between-firm job mobility, both before and during the crisis, and became less likely to make job-to-non-employment transitions during the crisis. Young workers are the group that suffered most from the crisis: they became less likely to make job-to-job transitions and their hazard of experiencing a transition into unemployment shot up.
Abstract:
Decision support models in intensive care units are developed to support the medical staff in their decision-making process. However, the optimization of these models is particularly difficult due to the dynamic, complex, and multidisciplinary nature of the environment. Thus, there is constant research into and development of new algorithms capable of extracting knowledge from large volumes of data, in order to obtain better predictive results than the current algorithms. To test the optimization techniques, a case study with real data provided by the INTCare project was explored. The data concern extubation cases. On this dataset, several models, such as Evolutionary Fuzzy Rule Learning, Lazy Learning, Decision Trees, and many others, were analysed in order to detect early extubation. The hybrid models Decision Trees with Genetic Algorithm, Supervised Classifier System, and KNNAdaptive obtained the highest accuracy rates, 93.2%, 93.1%, and 92.97% respectively, showing their feasibility to work in a real environment.
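A minimal sketch of this kind of classifier benchmark is shown below; since the INTCare extubation dataset is not public, a synthetic dataset stands in, and plain decision tree and k-nearest-neighbour models are used in place of the hybrid variants named above.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neighbors import KNeighborsClassifier

    # Illustrative accuracy comparison in the spirit of the study's
    # benchmark; synthetic data, plain (non-hybrid) model analogues.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    models = {
        "decision tree": DecisionTreeClassifier(random_state=0),
        "k-nearest neighbours": KNeighborsClassifier(n_neighbors=5),
    }
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
        print(f"{name}: {acc:.3f}")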
Abstract:
Doctoral thesis in Industrial and Systems Engineering.
Abstract:
Today's advances in high-performance computing are driven by the parallel processing capabilities of available hardware architectures. These architectures enable the acceleration of algorithms when the algorithms are properly parallelized and exploit the specific processing power of the underlying architecture. Most current processors are targeted at general purposes and integrate several processor cores on a single chip, resulting in what is known as a Symmetric Multiprocessing (SMP) unit. Nowadays even desktop computers make use of multicore processors, and the industry trend is to increase the number of integrated processor cores as technology matures. On the other hand, Graphics Processor Units (GPU), originally designed to handle only video processing, have emerged as interesting alternatives for algorithm acceleration. Currently available GPUs are able to run from 200 to 400 threads in parallel. Scientific computing can be implemented on this hardware thanks to the programmability of new GPUs, which have been denoted General Processing Graphics Processor Units (GPGPU). However, GPGPUs offer little memory compared with that available to general-purpose processors, so the implementation of algorithms needs to be addressed carefully. Finally, Field Programmable Gate Arrays (FPGA) are programmable devices which can implement hardware logic with low latency, high parallelism, and deep pipelines. These devices can be used to implement specific algorithms that need to run at very high speeds; however, programming them is harder than software approaches and debugging is typically time-consuming. In this context, where several alternatives for speeding up algorithms are available, our work aims at determining the main features of these architectures and developing the know-how required to accelerate algorithm execution on them. We look at identifying the algorithms that may fit better on a given architecture, as well as combining architectures so that they complement each other. In particular, we consider the degree of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
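A toy Python sketch of the SMP case discussed above: a dependency-free, data-parallel workload split across cores with multiprocessing. The workload itself is arbitrary; the point is that speedup hinges on low data dependence and little synchronization, which are exactly the algorithm properties the thesis examines.

    import multiprocessing as mp
    import time

    # Data-parallel, dependency-free workload: each chunk is independent,
    # so it maps cleanly onto SMP cores with no synchronization.
    def cpu_bound(n):
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        chunks = [2_000_000] * 8
        t0 = time.perf_counter()
        serial = [cpu_bound(n) for n in chunks]
        t1 = time.perf_counter()
        with mp.Pool() as pool:          # one worker per available core
            parallel = pool.map(cpu_bound, chunks)
        t2 = time.perf_counter()
        assert serial == parallel
        print(f"serial {t1 - t0:.2f}s, parallel {t2 - t1:.2f}s")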
Abstract:
Dissertation, Faculty of Medicine, University of Magdeburg, 2011
Abstract:
Background: When performing the Valsalva maneuver (VM), adults and preadolescents produce the same expiratory resistance values. Objective: To analyze heart rate (HR) in preadolescents performing VM, and to propose a new method for selecting expiratory resistance. Methods: The maximal expiratory pressure (MEP) was measured in 45 sedentary children aged 9-12 years, who subsequently performed VM for 20 s using an expiratory pressure of 60%, 70%, or 80% of MEP. HR was measured before, during, and after VM. These procedures were repeated 30 days later, and the data collected in the two sessions (E1, E2) were analyzed and compared for the periods before, during (0-10 and 10-20 s), and after VM using nonparametric tests. Results: All 45 participants adequately performed VM in E1 and E2 at 60% of MEP. However, only 38 (84.4%) and 25 (55.5%) of the participants performed the maneuver at 70% and 80% of MEP, respectively. The HR delta measured during 0-10 s and 10-20 s significantly increased as the expiratory effort increased, indicating an effective cardiac autonomic response during VM; however, our findings suggest the VM should not be performed at these higher intensities. Conclusion: HR increased with all effort intensities tested during VM. However, 60% of MEP was the only level of expiratory resistance that all participants could use to perform VM. Therefore, 60% of MEP may be the optimal expiratory resistance for clinical practice.
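For concreteness, the sketch below computes the three target expiratory pressures from a measured MEP and the HR delta in each VM window; all numeric values are invented examples, not the study's measurements.

    # Target expiratory pressures as percentages of MEP, and HR deltas per
    # VM window; sample values are invented for illustration only.
    mep = 80.0  # cmH2O, measured maximal expiratory pressure (example)
    targets = {pct: mep * pct / 100 for pct in (60, 70, 80)}
    print("target pressures:",
          {k: f"{v:.0f} cmH2O" for k, v in targets.items()})

    hr_before = 85.0                      # bpm, resting HR before VM
    hr_0_10, hr_10_20 = 97.0, 104.0       # bpm, mean HR in each window
    for label, hr in (("0-10 s", hr_0_10), ("10-20 s", hr_10_20)):
        print(f"HR delta {label}: {hr - hr_before:+.0f} bpm")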