960 results for Maximum Degree Proximity algorithm (MAX-DPA)


Relevance:

20.00%

Publisher:

Abstract:

The Amazon várzeas are an important component of the Amazon biome, but anthropic and climatic impacts have been leading to forest loss and interruption of essential ecosystem functions and services. The objectives of this study were to evaluate the capability of the Landsat-based Detection of Trends in Disturbance and Recovery (LandTrendr) algorithm to characterize changes in várzea forest cover in the Lower Amazon, and to analyze the potential of spectral and temporal attributes to classify forest loss as either natural or anthropogenic. We used a time series of 37 Landsat TM and ETM+ images acquired between 1984 and 2009. We used the LandTrendr algorithm to detect forest cover change and to extract the attributes "start year", "magnitude", and "duration" of the changes, as well as "NDVI at the end of series". Detection was restricted to areas identified as having forest cover at the start and/or end of the time series. We used the Support Vector Machine (SVM) algorithm to classify the extracted attributes, differentiating between anthropogenic and natural forest loss. Detection reliability was consistently high for change events along the Amazon River channel, but variable for changes within the floodplain. Spectral-temporal trajectories faithfully represented the nature of changes in floodplain forest cover, corroborating field observations. We estimated anthropogenic forest losses to be larger (1,071 ha) than natural losses (884 ha), with a global classification accuracy of 94%. We conclude that the LandTrendr algorithm is a reliable tool for studies of forest dynamics throughout the floodplain.
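The abstract gives no implementation details; as a minimal sketch of the attribute-based classification step, assuming the four LandTrendr attributes have already been exported to a table (the column names and file path below are hypothetical), an SVM classifier along the lines described could be set up with scikit-learn:

```python
# Minimal sketch of the SVM classification step described above.
# Assumes the LandTrendr change attributes were exported to a CSV with
# hypothetical columns: start_year, magnitude, duration, end_ndvi, and a
# label column (0 = natural loss, 1 = anthropogenic loss).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

df = pd.read_csv("landtrendr_change_attributes.csv")  # hypothetical path
X = df[["start_year", "magnitude", "duration", "end_ndvi"]]
y = df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

# RBF-kernel SVM on standardized attributes.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_train, y_train)

print("Global accuracy:", accuracy_score(y_test, model.predict(X_test)))
```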

Relevance:

20.00%

Publisher:

Abstract:

Integrated master's dissertation in Industrial Electronics and Computer Engineering

Relevance:

20.00%

Publisher:

Abstract:

Integrated master's dissertation in Civil Engineering

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVE: To evaluate the behavior of blood pressure during exercise in patients with hypertension controlled by first-line antihypertensive drugs. METHODS: From 979 ergometric tests we retrospectively selected 49 hypertensive patients (19 males), aged 53±12 years, with resting arterial pressure within the normal range (≤140/90 mmHg), all on pharmacological monotherapy: 12 on beta-blockers, 14 on calcium antagonists, 13 on diuretics, and 10 on angiotensin-converting enzyme inhibitors. Abnormal behavior of blood pressure during exercise was diagnosed when any one of the following criteria was detected: peak systolic pressure above 220 mmHg, rise in systolic pressure ≥10 mmHg/MET, or increase in diastolic pressure greater than 15 mmHg. RESULTS: A physiologic response of arterial blood pressure occurred in 50% of patients on beta-blockers, the best result (p<0.05); in 36% and 31% of those on calcium antagonists and diuretics, respectively; and in 20% of those on angiotensin-converting enzyme inhibitors, the poorest result (p<0.05). CONCLUSION: Beta-blockers were more effective than calcium antagonists, diuretics, and angiotensin-converting enzyme inhibitors in controlling blood pressure during exercise, and angiotensin-converting enzyme inhibitors were the least effective drugs.
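As a minimal sketch of the three diagnostic criteria quoted above (the function and parameter names are hypothetical; only the thresholds come from the abstract):

```python
# Sketch of the abnormal exercise blood-pressure criteria described above;
# names are hypothetical, thresholds are those stated in the abstract.
def abnormal_exercise_bp(peak_sbp, sbp_rise, mets_gained, dbp_increase):
    """Return True if any of the three criteria for an abnormal response is met.

    peak_sbp     -- peak systolic pressure during exercise (mmHg)
    sbp_rise     -- total rise in systolic pressure from rest to peak (mmHg)
    mets_gained  -- metabolic equivalents gained during the test (METs)
    dbp_increase -- increase in diastolic pressure from rest to peak (mmHg)
    """
    sbp_rise_per_met = sbp_rise / mets_gained if mets_gained > 0 else float("inf")
    return (
        peak_sbp > 220             # criterion 1: peak SBP above 220 mmHg
        or sbp_rise_per_met >= 10  # criterion 2: SBP rise >= 10 mmHg per MET
        or dbp_increase > 15       # criterion 3: DBP increase above 15 mmHg
    )

# Example: peak SBP 210 mmHg, SBP rose 60 mmHg over 8 METs, DBP rose 10 mmHg.
print(abnormal_exercise_bp(peak_sbp=210, sbp_rise=60, mets_gained=8, dbp_increase=10))  # False
```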

Relevance:

20.00%

Publisher:

Abstract:

In recent decades, increased interest has been evident in research on multi-scale hierarchical modelling in the field of mechanics, and also in the field of wood products and timber engineering. One of the main motivations for hierarchical modelling is to understand how properties, composition and structure at lower scale levels may influence, and be used to predict, the material properties at the macroscopic and structural engineering scale. This chapter presents the applicability of statistical and probabilistic methods, such as the Maximum Likelihood method and Bayesian methods, to the representation of timber's mechanical properties and to their inference accounting for prior information obtained at different scales of importance. These methods allow distinct reference properties of timber, such as density, bending stiffness and strength, to be analysed, and information obtained through different non-destructive, semi-destructive or destructive tests to be considered hierarchically. The basis and fundamentals of the methods are described, and recommendations and limitations are discussed. The methods may be used in several contexts; however, they require expert knowledge to assess the correct statistical fitting and to define the correlation structure between properties.
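As a toy illustration of the kind of Bayesian updating the chapter discusses, assuming a conjugate normal prior on mean timber density and normally distributed test results with known measurement scatter (all numbers are invented for the example, not taken from the chapter):

```python
# Minimal sketch of a Bayesian update of a timber property (here, mean density),
# assuming a conjugate normal prior and normally distributed test results with
# known measurement standard deviation. All values are illustrative.
import numpy as np

# Prior on mean density (kg/m^3), e.g. from a strength-class database.
prior_mean, prior_sd = 450.0, 30.0

# Hypothetical density measurements from non-destructive tests on a batch.
measurements = np.array([462.0, 471.0, 455.0, 480.0, 468.0])
meas_sd = 25.0  # assumed known measurement standard deviation

n = len(measurements)
prior_prec = 1.0 / prior_sd**2
data_prec = n / meas_sd**2

# Conjugate normal-normal update: precisions add, means are precision-weighted.
post_prec = prior_prec + data_prec
post_mean = (prior_prec * prior_mean + data_prec * measurements.mean()) / post_prec
post_sd = np.sqrt(1.0 / post_prec)

print(f"Posterior mean density: {post_mean:.1f} kg/m^3 (sd {post_sd:.1f})")
```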

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVE: The 6-minute walk test is a way of assessing exercise capacity and predicting survival in heart failure. The intensity of the 6-minute walk test has been suggested to be similar to that of daily activities. We investigated the effect of motivation during the 6-minute walk test in heart failure. METHODS: We studied 12 males, aged 45±12 years, with ejection fraction 23±7% and functional class III. Patients underwent the following tests: a maximal cardiopulmonary exercise test on the treadmill (max), a cardiopulmonary 6-minute walk test with the walking rhythm maintained between relatively easy and slightly tiring (levels 11 and 13 on the Borg scale) (6EB), and a cardiopulmonary 6-minute walk test using the usual recommendations (6RU). The 6EB and 6RU tests were performed on a treadmill with zero inclination and with the velocity controlled by the patient. RESULTS: The values obtained in the max, 6EB, and 6RU tests were, respectively, as follows: O2 consumption (ml.kg-1.min-1) 15.4±1.8, 9.8±1.9 (60±10%), and 13.3±2.2 (90±10%); heart rate (bpm) 142±12, 110±13 (77±9%), and 126±11 (89±7%); distance walked (m) 733±147, 332±66, and 470±48; and respiratory exchange ratio (R) 1.13±0.06, 0.9±0.06, and 1.06±0.12. Significant differences were observed in these variables between the max and 6EB tests, the max and 6RU tests, and the 6EB and 6RU tests (p<0.05). CONCLUSION: Patients who undergo the cardiopulmonary 6-minute walk test and are motivated to walk as much as they possibly can usually walk close to their maximum capacity, which may not correspond to that of their daily activities. The use of the Borg scale during the cardiopulmonary 6-minute walk test seems to correspond better to the metabolic demand of usual activities in this group of patients.
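The percentages of peak quoted above follow directly from the reported group means; a small arithmetic sketch using those mean values (the ± figures are ignored here):

```python
# Express the mean 6-minute walk test results as a percentage of the peak
# values from the maximal treadmill test, using the means from the abstract.
peak = {"vo2": 15.4, "hr": 142}      # maximal test: VO2 (ml.kg-1.min-1), HR (bpm)
walk_6eb = {"vo2": 9.8, "hr": 110}   # Borg-scale-guided walk (6EB)
walk_6ru = {"vo2": 13.3, "hr": 126}  # usual-recommendation walk (6RU)

for name, walk in (("6EB", walk_6eb), ("6RU", walk_6ru)):
    for var in ("vo2", "hr"):
        pct = 100.0 * walk[var] / peak[var]
        print(f"{name} {var}: {pct:.0f}% of peak")
# The printed ratios roughly reproduce the percentages quoted in the abstract
# (6EB ~64%/77%, 6RU ~86%/89% of peak VO2/HR).
```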

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVE: To suggest criteria to guide protocol prescription in ramp treadmill testing, according to sex and age, based on the velocity, inclination, and max VO2 reached by the population studied. METHODS: Prospective study describing heart rate (HR), time, velocity, inclination, and estimated VO2 at maximum effort in 1840 individuals from 4 to 79 years old who performed a treadmill test (TT) according to the ramp protocol. A paired Student t test was used to assess the difference between predicted and reached max VO2, calculated according to the formulas of the American College of Sports Medicine. RESULTS: Submaximal HR was surpassed in 90.1% of the examinations, with a mean time of 10.0±2.0 minutes. Initial and peak velocity and inclination of the exercise, as well as max VO2, were inversely proportional to age and were greater in male patients. Predicted max VO2 was significantly lower than that reached in all patients, except for female children and adolescents (age < 20 years old). CONCLUSION: Use of the velocity, inclination, and maximum VO2 actually reached as criteria for prescribing the ramp protocol may help in the performance of exercise treadmill testing. The ramp protocol was well accepted in all age groups and both sexes, with exercise time within the programmed 8 to 12 minutes.
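The abstract states that predicted max VO2 was calculated from the American College of Sports Medicine formulas; a minimal sketch using the standard ACSM walking and running treadmill equations, on the assumption that these are the variants meant (the abstract does not specify which variant or unit convention was used):

```python
# Sketch of the ACSM treadmill metabolic equations, assumed to be the formulas
# referred to above for predicting VO2 from speed and grade.
def acsm_vo2(speed_kmh: float, grade_percent: float, running: bool) -> float:
    """Estimate VO2 (ml.kg-1.min-1) from treadmill speed and grade.

    Walking: VO2 = 0.1*v + 1.8*v*g + 3.5
    Running: VO2 = 0.2*v + 0.9*v*g + 3.5
    where v is speed in m/min and g is the fractional grade.
    """
    v = speed_kmh * 1000.0 / 60.0   # km/h -> m/min
    g = grade_percent / 100.0       # percent grade -> fraction
    horiz, vert = (0.2, 0.9) if running else (0.1, 1.8)
    return horiz * v + vert * v * g + 3.5

# Example: running at 10 km/h on a 4% grade near the end of a ramp protocol.
print(f"{acsm_vo2(10.0, 4.0, running=True):.1f} ml.kg-1.min-1")
```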

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this project is to characterize the variability of the atmospheric paleocirculation in the mid-latitudes of South America, its effect on regional hydroclimatic fluctuation, and human vulnerability to the changes that have occurred since the Last Glacial Maximum/Holocene. The inter- and multidisciplinary approach proposed here for analysing past hydroclimatic variability, its causes and consequences, is unprecedented for this region of the country. It comprises: a) analysis of sedimentary climate archives with a multi-proxy approach (sedimentology, geochemistry, stable and radiogenic isotopes, mineralogy, ostracods and molluscs); b) determination of the present and past dynamics of atmospheric dust (AD), combining in situ measurements and sedimentary records; and c) analysis of human skeletal and malacological remains from archaeological sites. The project aims to: a) carry out multi-proxy analyses of natural climate records stored in lake systems of the Pampean region (S. Ambargasta, Mar Chiquita, Pocho, Melincué, Lagunas Encadenadas del Oeste de Buenos Aires) and in loess sequences, to infer the variability of atmospheric circulation since the LGM; b) increase the temporal resolution of the climate reconstructions for selected time windows; c) analyse the geochemical signal of the sedimentary record during contrasting climatic phases; d) identify the temporal variability of provenance and of the active processes through mineralogical and geochemical analyses; e) analyse the present-day environment in order to calibrate environmental indicators or proxies (isotopes, sediment flux, geochemistry, molluscs and ostracods) against the contemporary climatic scenario; f) analyse the climate archives jointly to infer regional atmospheric paleocirculation patterns; and g) elucidate adaptive strategies and the biological history of human populations in central Argentina during different climatic phases. This project addresses one of the least-known aspects of paleoenvironmental reconstructions: the role of aeolian material derived from the Southern Hemisphere and its impact on the regional carbon cycle. Although southern South America is one of the key areas for understanding this aspect, the effect of environmental changes on the flux of atmospheric dust, or of future climate and/or land-use changes, is not fully understood. The proposed work has direct implications for multiple disciplines, including atmospheric sciences, geochemistry, sedimentology, paleoclimatology and bioarchaeology. Our results will improve the understanding of regional climate change, of dust dynamics and their role as a forcing of the climate system, of present and past hydrological variability, and of the response of human populations. Deepening the study of paleoclimatic and bioarchaeological changes in the region will make it possible to analyse hydroclimatic variability and determine its relationship with situations of crisis and vulnerability of human populations. Likewise, inferring changes for periods with minimal or no human influence is a key tool for improving knowledge of climatic fluctuations in the extratropical area of South America.
These results will make it possible to analyse not only the mechanisms operating in the past climate system but also the factors that would explain the major hydroclimatic change recorded since 1970, whose effects have clearly impacted socio-economic activities in central Argentina.

Relevance:

20.00%

Publisher:

Abstract:

Today's advances in computing power come from parallel processing, given the characteristics of the new hardware architectures. Using this hardware appropriately accelerates the algorithms being executed (programs). However, properly converting an algorithm into its parallel form is complex, and that form is, in turn, specific to each type of parallel hardware. The most common general-purpose processors today are multicore parallel processors, also known as Symmetric Multi-Processors (SMP). It is now difficult to find a desktop processor without some degree of SMP-style parallelism, and the development trend is towards processors with an ever larger number of cores. On the other hand, video processing devices (Graphics Processor Units, GPU) have developed their computing power by incorporating multiple processing units in their design, to the point that it is currently not difficult to find GPU boards capable of running 200 to 400 parallel processing threads. These processors are very fast and are specific to the task for which they were designed, mainly video processing. However, since this type of processing has much in common with scientific computing, these devices have been reoriented under the name General Processing Graphics Processor Unit (GPGPU). Unlike the SMP processors mentioned above, GPGPUs are not general-purpose and present complications for general use, owing to the limited amount of memory available on each board and to the kind of parallel processing required for their use to be productive. Programmable logic devices (FPGA) are capable of performing large numbers of operations in parallel, so they can be used to implement specific algorithms that exploit the parallelism they offer. Their drawback stems from the complexity of programming and testing the algorithm instantiated on the device. Given this diversity of parallel processors, the objective of our work is to analyse the specific characteristics of each of them and their impact on the structure of algorithms, so that their use achieves processing performance commensurate with the number of resources employed, and so that they can be combined in a way that makes them complementary. Specifically, starting from the characteristics of the hardware, we determine the properties a parallel algorithm must have in order to be accelerated; the characteristics of the parallel algorithms will in turn determine which of these new types of hardware are most suitable for their instantiation. In particular, the level of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware will be taken into account.
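As a minimal illustration, not taken from the thesis, of the kind of workload that suits an SMP/multicore processor under the criteria listed above (low data dependence, no synchronization during processing, modest data size), a hypothetical embarrassingly parallel map in Python:

```python
# Sketch of an SMP-friendly workload: independent per-chunk computations with
# no shared state, so worker processes never need to synchronize.
from multiprocessing import Pool

def work(chunk):
    # Independent per-chunk computation: no data dependence between chunks.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::8] for i in range(8)]  # split the input into 8 independent chunks

    with Pool(processes=8) as pool:          # one worker per (assumed) core
        partial_sums = pool.map(work, chunks)

    print(sum(partial_sums))
```

A workload with heavy data dependence or frequent synchronization between steps would not decompose this cleanly, which is exactly the kind of distinction the criteria above are meant to capture.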

Relevance:

20.00%

Publisher:

Abstract:

The vulnerability to pollution and hydrochemical variation of groundwater in the mid-west karstic lowlands of Ireland were investigated from October 1992 to September 1993, as part of an EU STRIDE project at Sligo Regional Technical College. Eleven springs were studied in the three local authority areas of Co. Galway, Co. Mayo, and Co. Roscommon. Nine of the springs drain locally or regionally important karstic aquifers and two drain locally important sand and gravel aquifers. The maximum average daily discharge of any of the springs was 16,000 m3/day. Determination of the vulnerability of groundwater to pollution relies heavily on an examination of subsoil deposits in an area, since they can act as a protecting or filtering layer over groundwater. Within aquifers/spring catchments, chemical reactions such as adsorption, solution-precipitation or acid-base reactions occur and modify the hydrochemistry of groundwater (Lloyd and Heathcote, 1985). The hydrochemical processes that predominate depend on the mineralogy of the aquifer, the hydrogeological environment, the overlying subsoils, and the history of groundwater movement. The aim of this MSc research thesis was to investigate the hydrochemical variation of spring outflow and to assess the relationship between these variations and the intrinsic vulnerability of the springs and their catchments. If such a relationship can be quantified, then it is hoped that the hydrochemical variation of a spring may indicate the vulnerability of a spring catchment without the need to determine it by field mapping. Such a method would be invaluable to any of the three local authorities, since they would be able to prioritise sources that are most at risk from pollution using simple techniques of chemical sampling and statistical analysis. For each spring a detailed geological, hydrogeological and hydrochemical study was carried out. Individual catchment areas were determined with a water balance/budget and groundwater tracing. The subsoil geology of each spring catchment was mapped at the 1:10,560 scale and digitised to the 1:25,000 scale with AutoCad™ and ArcInfo™. The vulnerability of each spring was determined using the Geological Survey's vulnerability guidelines. Field measurements and laboratory-based chemistry analyses of the springs were undertaken by personnel from both the EPA Regional Laboratory in Castlebar, Co. Mayo, and the Environment Section of Roscommon Co. Council. Electrical conductivity and temperature (°C) were sampled fortnightly in the field using a WTW microprocessor conductivity meter. A percentage (%) vulnerability was applied to each spring in order to indicate the areal extent of the four main classes of vulnerability (Extreme, High, Moderate, and Low) occurring within the confines of each spring catchment. Hydrochemical variation for the springs was presented as the coefficient of variation of electrical conductivity. The results of this study show that a clear relationship exists between the degree of vulnerability of each catchment area, as defined by the subsoil cover, and the coefficient of variation of EC, with the coefficient of variation increasing as the vulnerability increases. The coefficient of variation of electrical conductivity is considered to be a parameter that gives a good general reflection of the degree of vulnerability occurring in a spring catchment in Ireland's karstic lowlands.
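As a minimal sketch of the summary statistic used in the study, the coefficient of variation of a spring's electrical conductivity record can be computed as below (the EC readings are illustrative, not data from the thesis):

```python
# Coefficient of variation (CV) of fortnightly electrical conductivity (EC)
# readings for a single spring; values are hypothetical.
import statistics

ec_readings = [620, 585, 640, 555, 610, 595, 630, 570, 600, 615]  # uS/cm, illustrative

mean_ec = statistics.mean(ec_readings)
sd_ec = statistics.stdev(ec_readings)      # sample standard deviation
cv_percent = 100.0 * sd_ec / mean_ec

print(f"EC coefficient of variation: {cv_percent:.1f}%")
# Under the relationship reported above, a higher CV of EC would be expected
# for catchments with a larger proportion of Extreme/High vulnerability.
```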