980 results for Algorithm efficiency
Abstract:
The barrier effect and the performance of an organic–inorganic hybrid (OIH) sol–gel coating are highly dependent on the coating deposition method as well as on the processing conditions. In this work, the influence of experimental parameters in the dip-coating method was studied. Factors such as residence time (Rt), a curing step between each dip step, and the number of layers of sol–gel OIH films deposited on hot-dip galvanized steel (HDGS) to prevent corrosion in highly alkaline environments were studied. These OIH coatings were obtained using a functionalized siloxane, 3-isocyanatopropyltriethoxysilane, that reacted with a diamino-functionalized oligopolymer (Jeffamine® D-230). The barrier efficiency of the OIH coatings in simulated concrete pore solutions (SCPS) was assessed in the first moments of contact by electrochemical impedance spectroscopy and potentiodynamic methods. The durability and stability of the OIH coatings in SCPS were monitored for eight days by macrocell current density. The morphological characterization of the surface was performed by scanning electron microscopy before and after exposure to SCPS. Glow discharge optical emission spectroscopy was used to obtain quantitative composition profiles to investigate the thickness of the OIH coatings as a function of the number of layers deposited and the influence of Rt on the coating thickness.
Abstract:
The Amazon várzeas are an important component of the Amazon biome, but anthropic and climatic impacts have been leading to forest loss and to the interruption of essential ecosystem functions and services. The objectives of this study were to evaluate the capability of the Landsat-based Detection of Trends in Disturbance and Recovery (LandTrendr) algorithm to characterize changes in várzea forest cover in the Lower Amazon, and to analyze the potential of spectral and temporal attributes to classify forest loss as either natural or anthropogenic. We used a time series of 37 Landsat TM and ETM+ images acquired between 1984 and 2009. We used the LandTrendr algorithm to detect forest cover change and the attributes of "start year", "magnitude", and "duration" of the changes, as well as "NDVI at the end of series". Detection was restricted to areas identified as having forest cover at the start and/or end of the time series. We used the Support Vector Machine (SVM) algorithm to classify the extracted attributes, differentiating between anthropogenic and natural forest loss. Detection reliability was consistently high for change events along the Amazon River channel, but variable for changes within the floodplain. Spectral-temporal trajectories faithfully represented the nature of changes in floodplain forest cover, corroborating field observations. We estimated anthropogenic forest losses to be larger (1,071 ha) than natural losses (884 ha), with a global classification accuracy of 94%. We conclude that the LandTrendr algorithm is a reliable tool for studies of forest dynamics throughout the floodplain.
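For illustration, the classification step described above (an SVM over the LandTrendr change attributes "start year", "magnitude", "duration", and end-of-series NDVI) could be sketched as follows. The feature values, labels, and SVM parameters are hypothetical placeholders, not the authors' actual data or configuration.

```python
# Hedged sketch: SVM classification of LandTrendr change attributes into
# anthropogenic vs. natural forest loss. All values below are invented
# for illustration; they are not the study's data or settings.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# One row per detected change event:
# [start_year, magnitude, duration_years, ndvi_end_of_series]
X = np.array([
    [1988, 0.42, 1, 0.31],   # abrupt loss, low ending NDVI
    [1995, 0.18, 6, 0.62],   # gradual loss, partial recovery
    [2002, 0.55, 2, 0.25],
    [1990, 0.12, 8, 0.70],
])
y = np.array([1, 0, 1, 0])   # 1 = anthropogenic loss, 0 = natural loss

# Scaling matters for SVMs because the attributes have very different ranges.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)

new_event = [[1999, 0.50, 1, 0.28]]  # hypothetical abrupt change event
print("predicted class:", clf.predict(new_event))  # 1 -> anthropogenic
```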
Abstract:
This article presents the results of an experimental investigation of the resistance to chemical attack (with sulphuric, hydrochloric and nitric acid) of several materials: OPC concrete, high-performance concrete, epoxy resin, acrylic paint and a fly ash-based geopolymeric mortar. The three acids were used at three high concentrations (10, 20 and 30%) to simulate long-term degradation. A cost analysis was also performed. The results show that the epoxy resin has the best resistance to chemical attack regardless of acid type and acid concentration. However, the cost analysis shows that the epoxy resin-based solution is the least cost-efficient, with a cost some 70% above that of the fly ash-based geopolymeric mortar.
Abstract:
Integrated master's dissertation in Industrial Electronics and Computer Engineering
Abstract:
Cities nowadays face several environmental problems due to population migration to urban areas, which causes urban sprawl. It is therefore important to define solutions that improve Land Use Efficiency (LUE). This article proposes the use of community-building features as a solution to increase land use efficiency. Community buildings rely on the design of shared building spaces to reduce the floor area of buildings. This work tests the performance of some case-study buildings regarding LUE to analyse the possible pros and cons. A quantifiable method is used to assess buildings' LUE, which considers the number of occupants, the gross floor area, the functional area, the implantation area and the allotment area. Buildings with higher values for this index have reduced environmental impacts because they use fewer construction materials, produce less construction and demolition waste and require less energy for building operation. The results showed that the use of community-building features can increase the Land Use Efficiency of buildings.
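The abstract does not give the exact form of the LUE index, so the formula below is only a hypothetical stand-in that combines the five quantities it lists (occupants, gross floor area, functional area, implantation area, allotment area). It illustrates the kind of calculation involved, not the authors' actual method.

```python
# Hypothetical sketch of a Land Use Efficiency (LUE) index. The way the
# five inputs are combined is an assumption for illustration; the paper's
# actual formula is not reproduced in the abstract.
from dataclasses import dataclass

@dataclass
class Building:
    occupants: int
    gross_floor_area_m2: float     # total built floor area
    functional_area_m2: float      # area actually serving occupants
    implantation_area_m2: float    # footprint on the ground
    allotment_area_m2: float      # plot size

def lue_index(b: Building) -> float:
    """Toy index: occupants served per unit of land consumed, penalized by
    built-area overhead (gross vs. functional area). Higher is better."""
    land_use = b.implantation_area_m2 + b.allotment_area_m2
    built_overhead = b.gross_floor_area_m2 / b.functional_area_m2
    return b.occupants / (land_use * built_overhead)

# Hypothetical community building with shared spaces:
shared = Building(occupants=40, gross_floor_area_m2=1200,
                  functional_area_m2=1000, implantation_area_m2=300,
                  allotment_area_m2=500)
print(f"LUE index (toy): {lue_index(shared):.5f}")
```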
Abstract:
Integrated master's dissertation in Civil Engineering
Abstract:
Integrated master's dissertation in Civil Engineering
Abstract:
Supplemental data for this article can be accessed at http://dx.doi.org/10.1080/07900627.2015.1070091. It includes an easy-to-use spreadsheet that calculates the efficiencies used in this paper, that is, Sefficiency with energy considerations.
Abstract:
Doctoral thesis in Civil Engineering.
Abstract:
Goat milk production is considered, in our country and in the province of Córdoba, a productive alternative for the sustainable and socio-economic development of the population. Moreover, demand for this milk in the domestic and international markets is growing, so producers must guarantee its safety and quality in accordance with current standards. The control and treatment of the various diseases is therefore of vital importance, both to maximize herd production and to comply with the required safety standards. In this context, caprine mastitis is one of the diseases affecting the productivity of the sector, and antimicrobial therapy is one of the measures used to control it. This project will work with marbofloxacin and cefquinome, establishing rational (effective and safe) guidelines for their use against this condition at the regional level. The efficacy indicators will be set according to integrated pharmacokinetic (PK) and pharmacodynamic (PD) parameters. The latter (PD) will be calculated by determining the minimum inhibitory concentrations of bacterial strains isolated from caprine mastitis in Córdoba. Pharmacokinetic parameters will be established for single and multiple doses of marbofloxacin (5 mg/kg IV, IM) and cefquinome (2 mg/kg IV, IM and IMM) from serum and milk samples of Anglo Nubian goats (n = 6 per antimicrobial; crossover design with respect to the route of administration). Their concentrations in these fluids will be determined by high-performance liquid chromatography. The PK/PD results for both drugs will be compared with the parameters recommended by experts for each type of antimicrobial and will be used as a basis for recommending a rational therapy, which is fundamental to optimize dosage, guarantee clinical efficacy, and minimize the selection and spread of resistant strains of pathogens.
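As background to the PK/PD integration mentioned above: for concentration-dependent antimicrobials such as fluoroquinolones (e.g. marbofloxacin) a commonly used efficacy index is AUC24/MIC, while for time-dependent ones such as cephalosporins (e.g. cefquinome) it is the fraction of the dosing interval with concentrations above the MIC (%T>MIC). The sketch below computes both from a hypothetical concentration-time profile; all numbers are illustrative, not data from this project.

```python
# Hedged sketch of two standard PK/PD indices. The concentration-time
# points and the MIC value are invented for illustration only.
import numpy as np

t = np.array([0, 1, 2, 4, 8, 12, 24])             # h after dosing (hypothetical)
c = np.array([0, 4.1, 3.2, 2.0, 0.9, 0.4, 0.05])  # ug/mL (hypothetical)
mic = 0.25                                        # ug/mL (hypothetical isolate)

# AUC over 24 h by the trapezoidal rule, then the AUC24/MIC ratio used
# for concentration-dependent drugs (e.g. fluoroquinolones).
auc24 = np.trapz(c, t)
print(f"AUC24/MIC = {auc24 / mic:.1f}")

# %T>MIC over the dosing interval, the index used for time-dependent
# drugs (e.g. cephalosporins): fraction of time the interpolated profile
# stays above the MIC.
tt = np.linspace(t[0], t[-1], 2401)
cc = np.interp(tt, t, c)
print(f"%T>MIC = {100 * np.mean(cc > mic):.0f}%")
```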
Abstract:
Today's advances in computing power are driven by the parallel processing capabilities of available hardware architectures. These architectures accelerate algorithms when the algorithms are properly parallelized and exploit the specific processing power of the underlying hardware; converting an algorithm into a suitable parallel form is, however, complex, and that form is specific to each type of parallel hardware. Most current general-purpose processors integrate several cores on a single chip, forming parallel processors also known as Symmetric Multi-Processors (SMP). Nowadays it is hard to find a desktop processor without SMP-style parallelism, and the trend is toward ever larger numbers of cores. Graphics Processor Units (GPU), in turn, originally designed for video processing, have grown their computing power by integrating multiple processing units, to the point that current boards support on the order of 200 to 400 parallel processing threads. Because that workload has much in common with scientific computing, these devices have been reoriented under the name General Processing Graphics Processor Unit (GPGPU). Unlike the SMP processors above, GPGPUs are not general purpose: the memory available on each board is limited, and the parallel processing pattern must match the architecture for its use to be productive. Finally, Field Programmable Gate Arrays (FPGA) are programmable logic devices capable of performing large numbers of operations in parallel, with low latency and deep pipelines, so they can be used to implement specific algorithms that must run at very high speed; their drawback is the complexity of programming and testing the algorithm instantiated on the device. Given this diversity of parallel processors, our work focuses on analyzing the specific characteristics of each one and their impact on the structure of algorithms, so that processing performance scales with the resources used and the platforms can be combined to complement each other. Specifically, starting from the hardware characteristics, we determine the properties a parallel algorithm must have in order to be accelerated; these properties in turn determine which of these hardware types is most suitable for its implementation. In particular, we consider the degree of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
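To make the data-dependence point concrete, here is a minimal sketch (not from the thesis) contrasting an embarrassingly parallel map, which SMP cores can speed up readily, with a loop-carried dependence that forces sequential execution. The workload function is a hypothetical stand-in for a real kernel.

```python
# Minimal sketch contrasting a parallelizable loop with a dependence-bound
# one on an SMP. The workload is a hypothetical stand-in, not real code
# from the thesis.
from multiprocessing import Pool

def heavy(x: int) -> int:
    # Stand-in for an expensive computation that is independent per item.
    return sum(i * i for i in range(x % 1000 + 1000))

def independent(data):
    # No dependence between iterations: trivially parallel across cores.
    with Pool() as pool:
        return pool.map(heavy, data)

def loop_carried(data):
    # Each step needs the previous result, so iterations cannot be
    # distributed across cores this way.
    acc, out = 0, []
    for x in data:
        acc = (acc + heavy(x)) % 2**31   # dependence on acc forces order
        out.append(acc)
    return out

if __name__ == "__main__":
    data = list(range(10_000))
    print(len(independent(data)), len(loop_carried(data)))
```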
Abstract:
There are presently over 182 RBC plants treating domestic wastewater in the Republic of Ireland, 136 of which have been installed since 1986. This treatment plant technology, although not new, is becoming increasingly popular. The aim of this research was to assess the effects that a household detergent has on rotating biological contactor (RBC) treatment plant efficiency. Household detergents contribute phosphorus to the surrounding environment and can also remove beneficial biomass from the disc media. A simple modification was made to a conventional flat-disc unit to increase the oxygen transfer of the process. The treatment efficiency of the modified RBC (with aeration cups attached) was assessed against a parallel conventional system, with and without detergent loading. The parameters monitored were chemical oxygen demand (COD), biochemical oxygen demand (BOD), nitrates, phosphates, dissolved oxygen, the motor's power consumption, pH, and temperature. Microscopic analysis of the biofilm was also carried out. The treatment efficiency of both units was compared based on COD/BOD removal. The degree of nitrification achievable by both units was also assessed, with any fluctuations in pH noted. The phosphorus removal capabilities of both units were monitored, and the relationships between detergent concentrations and COD removal efficiencies were analysed.
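Removal efficiency for COD or BOD is conventionally the influent-to-effluent reduction expressed as a percentage. The sketch below shows that standard calculation with made-up concentrations (not data from this study).

```python
# Standard removal-efficiency calculation for COD/BOD. The influent and
# effluent concentrations below are illustrative, not the study's data.
def removal_efficiency(influent_mg_l: float, effluent_mg_l: float) -> float:
    """Percent of the influent load removed across the treatment unit."""
    return 100.0 * (influent_mg_l - effluent_mg_l) / influent_mg_l

cod_in, cod_out = 450.0, 60.0   # mg/L, hypothetical
bod_in, bod_out = 250.0, 20.0   # mg/L, hypothetical
print(f"COD removal: {removal_efficiency(cod_in, cod_out):.1f}%")
print(f"BOD removal: {removal_efficiency(bod_in, bod_out):.1f}%")
```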
Abstract:
Magdeburg, University, Faculty of Process and Systems Engineering, Dissertation, 2011
Abstract:
Background: Vascular remodeling, the dynamic dimensional change in the face of stress, can take different directions as well as magnitudes in atherosclerotic disease. Classical measurements rely on reference segments at a distance, risking inappropriate comparison between dissimilar vessel portions.
Objective: To explore a new method for quantifying vessel remodeling, based on the comparison between a given target segment and its inferred normal dimensions.
Methods: Geometric parameters and plaque composition were determined in 67 patients using three-vessel intravascular ultrasound with virtual histology (IVUS-VH). Coronary vessel remodeling at the cross-section (n = 27,639) and lesion (n = 618) levels was assessed using classical metrics and a novel analytic algorithm based on the fractional vessel remodeling index (FVRI), which quantifies the total change in arterial wall dimensions relative to the estimated normal dimension of the vessel. A prediction model was built to estimate the normal dimension of the vessel for the calculation of FVRI.
Results: According to the new algorithm, the "Ectatic" remodeling pattern was least common, "Complete compensatory" remodeling was present in approximately half of the instances, and "Negative" and "Incomplete compensatory" remodeling types were detected in the remainder. Compared to a traditional diagnostic scheme, FVRI-based classification appeared to better discriminate plaque composition by IVUS-VH.
Conclusion: Quantitative assessment of coronary remodeling using target segment dimensions offers a promising approach to evaluate the vessel response to plaque growth/regression.
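The abstract defines FVRI only qualitatively (total change in wall dimensions relative to the vessel's estimated normal dimension), so the formula below is a plausible reading for a single cross-section, not the paper's published definition; the areas and thresholds are invented.

```python
# Hypothetical reading of a fractional vessel remodeling index (FVRI) at
# one cross-section: fractional deviation of the measured vessel (EEM)
# area from its estimated normal area. Illustration only; this is not the
# paper's published formula, and the cut-offs below are invented.
def fvri(eem_area_mm2: float, estimated_normal_area_mm2: float) -> float:
    """Fractional deviation of measured vessel area from its inferred normal."""
    return (eem_area_mm2 - estimated_normal_area_mm2) / estimated_normal_area_mm2

def classify(value: float, tol: float = 0.05) -> str:
    # Illustrative thresholds only; the study's cut-offs are not given here.
    if value > tol:
        return "positive remodeling (toward 'Ectatic')"
    if value < -tol:
        return "'Negative' remodeling"
    return "compensatory (near inferred normal)"

measured, normal = 16.8, 15.0   # mm^2, hypothetical IVUS cross-section
v = fvri(measured, normal)
print(f"FVRI = {v:+.2f} -> {classify(v)}")
```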
Abstract:
Magdeburg, University, Faculty of Process and Systems Engineering, Dissertation, 2011