982 results for Inside-Outside Algorithm
Abstract:
OBJECTIVE: To estimate the prevalences of tobacco use, tobacco experimentation, and frequent smoking among Brazilian adolescents. METHODS: We evaluated participants of the cross-sectional, nationwide, school-based Study of Cardiovascular Risks in Adolescents (ERICA), which included 12- to 17-year-old adolescents from municipalities with over 100,000 inhabitants. The study sample had a clustered, stratified design and was representative of the whole country, its geographical regions, and all 27 state capitals. The information was obtained with self-administered questionnaires. Tobacco experimentation was defined as having tried cigarettes at least once in life. Adolescents who had smoked on at least one day over the previous 30 days were considered current cigarette smokers. Having smoked cigarettes for at least seven consecutive days was the indicator for regular tobacco consumption. Taking the complex sampling design into account, prevalences and 95% confidence intervals were estimated according to sociodemographic and socio-environmental characteristics. RESULTS: We evaluated 74,589 adolescents. Among these, 18.5% (95%CI 17.7-19.4) had smoked at least once in life, 5.7% (95%CI 5.3-6.2) smoked at the time of the research, and 2.5% (95%CI 2.2-2.8) smoked frequently. Adolescents aged 15 to 17 years had higher prevalences for all indicators than those aged 12 to 14 years. The prevalences did not differ significantly between sexes. The highest prevalences were found in the South region and the lowest in the Northeast region. Regardless of sex, prevalences were higher among adolescents who had had paid jobs, who lived with only one parent, and who reported contact with smokers either inside or outside their homes. Female adolescents from public schools were found to smoke more than those from private schools. CONCLUSIONS: Tobacco use among adolescents remains a challenge. To reduce the prevalence of tobacco use among young people, especially those in conditions of socioeconomic vulnerability, Brazil must consolidate and expand effective public health measures.
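As a rough illustration of the interval estimates reported above, the sketch below computes a Wald 95% confidence interval for a prevalence; it deliberately ignores the study's clustered, stratified design (which widens the intervals, as the comparison with the reported CI shows), and the case count is back-computed from the reported percentage rather than taken from the data.

```python
import math

def prevalence_ci(n_cases, n_total, z=1.96):
    """Wald 95% CI for a prevalence; a simplification that ignores
    the survey's clustered, stratified design (which widens the CI)."""
    p = n_cases / n_total
    se = math.sqrt(p * (1 - p) / n_total)
    return p, p - z * se, p + z * se

# Illustrative numbers only: 5.7% current smokers among 74,589 adolescents.
p, lo, hi = prevalence_ci(round(0.057 * 74589), 74589)
print(f"prevalence = {p:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```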
Abstract:
This paper presents a genetic algorithm for the Resource Constrained Project Scheduling Problem (RCPSP). The chromosome representation of the problem is based on random keys. The schedule is constructed using a heuristic priority rule in which the priorities of the activities are defined by the genetic algorithm. The heuristic generates parameterized active schedules. The approach was tested on a set of standard problems taken from the literature and compared with other approaches. The computational results validate the effectiveness of the proposed algorithm.
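A minimal sketch of the random-key decoding idea described above: each activity's key doubles as its priority in a serial schedule-generation scheme that produces a precedence- and resource-feasible schedule. The toy instance, capacity, and names are hypothetical, not the paper's benchmark data.

```python
import random

# Hypothetical toy instance: activity -> (duration, resource demand, predecessors)
acts = {
    1: (3, 2, []),
    2: (2, 3, [1]),
    3: (4, 2, [1]),
    4: (2, 4, [2, 3]),
}
CAPACITY = 4

def decode(keys):
    """Serial schedule-generation scheme: the random key of each activity
    acts as its priority (higher key = scheduled earlier among eligibles)."""
    finish, start = {}, {}
    usage = {}  # resource usage per time unit
    while len(finish) < len(acts):
        eligible = [a for a in acts if a not in finish
                    and all(p in finish for p in acts[a][2])]
        a = max(eligible, key=lambda i: keys[i])
        dur, dem, preds = acts[a]
        t = max([finish[p] for p in preds], default=0)
        # shift the start until the resource fits for the whole duration
        while any(usage.get(u, 0) + dem > CAPACITY for u in range(t, t + dur)):
            t += 1
        for u in range(t, t + dur):
            usage[u] = usage.get(u, 0) + dem
        start[a], finish[a] = t, t + dur
    return max(finish.values())  # makespan = fitness the GA minimizes

keys = {a: random.random() for a in acts}  # one random-key chromosome
print("makespan:", decode(keys))
```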
Abstract:
Report on Supervised Professional Practice, Master's in Pre-School Education
Abstract:
This work addresses signal propagation and fractional-order dynamics during the evolution of a genetic algorithm (GA). To investigate the phenomena involved in the evolution of the GA population, the mutation operator is exposed to excitation perturbations during some generations and the corresponding fitness variations are evaluated. Three distinct fitness functions are used to study their influence on the GA dynamics. The input and output signals are studied, revealing a fractional-order dynamic evolution characteristic of long-term system memory.
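The experimental idea can be sketched as follows: run a simple GA, inject a square-wave perturbation into the mutation rate for a window of generations, and record the mean-fitness response for later input/output (e.g., spectral) analysis. All operators and parameters below are illustrative assumptions, not the authors' setup.

```python
import random

def fitness(x):          # hypothetical fitness: maximize ones in a bitstring
    return sum(x)

def evolve(n_gen=60, pop_size=30, n_bits=32, base_mut=0.01,
           pulse=(20, 25), pulse_mut=0.2):
    """Track mean fitness while the mutation rate receives a square-wave
    'excitation' between generations pulse[0] and pulse[1]."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    history = []
    for g in range(n_gen):
        mut = pulse_mut if pulse[0] <= g < pulse[1] else base_mut
        # tournament selection + uniform crossover + bit-flip mutation
        def breed():
            p1 = max(random.sample(pop, 2), key=fitness)
            p2 = max(random.sample(pop, 2), key=fitness)
            child = [random.choice(pair) for pair in zip(p1, p2)]
            return [1 - b if random.random() < mut else b for b in child]
        pop = [breed() for _ in range(pop_size)]
        history.append(sum(map(fitness, pop)) / pop_size)
    return history  # the input (mut) and output (fitness) signals can then
                    # be compared, e.g. via their Fourier transforms

print(evolve()[:5])
```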
Abstract:
This dissertation studied an adiabatic air-conditioning plant whose purpose is to control the temperature and humidity of a hall containing twisting and winding equipment, belonging to Continental - ITA. Data on the indoor and outdoor temperature and humidity of the hall were collected. These parameters were found not to be within the desired optimal values, 26 ± 1 °C and 50 ± 5%, so it was necessary to estimate the nominal cooling needs. This value was determined from the Regulamento das Características de Comportamento Térmico dos Edifícios (RCCTE), yielding 79 kWh/m².°C. To assess whether the air-conditioning units installed in the hall under study met these needs, their cooling capacities were calculated, giving a maximum value of 64 kWh/m².°C. In parallel with this study, the humidification efficiency of each unit was calculated for the months of March and September. The values obtained fluctuated, with a maximum of 100% in September. This is because the outdoor temperature in that month is higher and, consequently, the humidification efficiency of the unit is greater, since the amount of water the air can hold is also higher. To bridge the gap between the nominal cooling needs and the cooling capacity of the units, several solutions were analysed which, if implemented, could help save energy. One of these solutions was replacing the current humidification system with a more efficient high-pressure system. The economic study of this investment yielded a payback period of two years. Two further investments were also presented, in which the existing automatic control system was modified, yielding a payback period of two years for one and of three and a half years for the other.
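The payback figures quoted above follow from a simple (undiscounted) payback calculation; the sketch below shows the arithmetic with placeholder numbers, since the dissertation's actual investment and savings values are not given here.

```python
# Simple (undiscounted) payback period, as used for rough screening of
# retrofit investments; the figures below are hypothetical placeholders,
# not values from the dissertation.
investment_eur = 50_000        # cost of the high-pressure humidification system
annual_savings_eur = 25_000    # estimated yearly energy savings
payback_years = investment_eur / annual_savings_eur
print(f"payback: {payback_years:.1f} years")  # -> 2.0 years
```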
Abstract:
IEEE International Symposium on Circuits and Systems, pp. 724-727, Seattle, USA
Abstract:
Volatile organic compounds are a common source of groundwater contamination that can be easily removed by air stripping in randomly packed columns with counter-current flow between the phases. This work proposes a new methodology for column design, for any particular type of packing and contaminant, that avoids the pre-defined diameter required by the classical approach. It also makes the graphical Eckert generalized correlation for pressure-drop estimates unnecessary. The hydraulic features are chosen first, as a design criterion, and only afterwards are the mass-transfer phenomena incorporated, in contrast to the conventional approach. The design procedure was translated into a convenient algorithm using C++ as the programming language. A column was built in order to test the models used both in the design and in the simulation of the column's performance. The experiments were carried out using a solution of chloroform in distilled water. Another model was built to simulate the operational performance of the column, both in steady state and in transient conditions. It consists of a system of two nonlinear partial differential equations (distributed parameters). Nevertheless, when the flows are steady, the system becomes linear, although it has no evident analytical solution. In steady state the resulting system of ODEs can be solved, allowing the concentration profile in both phases inside the column to be calculated. In the transient state the system of PDEs was solved numerically by finite differences, after linearization.
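A minimal sketch of the steady-state case described above: two coupled countercurrent mass balances with a linear equilibrium, solved as a boundary-value problem. All parameter values are illustrative assumptions, not the thesis's fitted values.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Hypothetical parameters (not the thesis values): countercurrent air
# stripping of chloroform, linear equilibrium C_G* = H * C_L.
uL, uG = 0.005, 0.5      # superficial velocities, m/s (liquid down, gas up)
KLa = 0.02               # overall mass-transfer coefficient, 1/s
H = 0.15                 # dimensionless Henry constant
Z = 3.0                  # packed height, m
CL_in = 1.0              # liquid feed concentration at the top (normalized)

def odes(z, y):
    CL, CG = y
    flux = KLa * (CL - CG / H)        # local stripping driving force
    return np.vstack([flux / uL,      # liquid balance (z measured upward)
                      flux / uG])     # gas balance

def bc(y_bottom, y_top):
    # clean air enters at the bottom; liquid feed enters at the top
    return np.array([y_bottom[1], y_top[0] - CL_in])

z = np.linspace(0, Z, 50)
sol = solve_bvp(odes, bc, z, np.ones((2, 50)) * 0.5)
print(f"liquid outlet concentration: {sol.y[0, 0]:.3f} "
      f"(removal {1 - sol.y[0, 0] / CL_in:.1%})")
```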
Abstract:
Dissertation presented in fulfilment of the requirements for the degree of Master in Ethnomusicology
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the degree of Master in Computer Engineering (Engenharia Informática)
Abstract:
The Vilar de Frades church is part of the Vilar de Frades Monastery, located in the north of Portugal (Barcelos). The monastery, founded in 566, underwent several architectural modifications and restoration works, the most relevant in the 16th century. The granite church has one nave and six bays, holding ten chapels with cross-ribbed vaults. Nowadays, the chapels present severe biological colonization characterised by an intense green biofilm, which is becoming apparent in other locations inside the church as well. In the course of a general survey of the conservation state of the church, a thorough campaign was planned to assess the main biodeterioration agents, map the biological colonization, and determine the environmental conditions. Laboratory analyses were performed with optical microscopy and spectrofluorometry. This study presents the results of this campaign. Details of the conservation and preservation works that need to be implemented are also presented.
Abstract:
Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider metrics like performance and area efficiency, where the designer aims for the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to perform a formal analysis of the algorithms considering the main architectural aspects and to determine how each particular architectural aspect relates to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and determined an equation that relates the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit implementation of the proposed architecture, using a 65 nm technology, is able to achieve 464 GFLOPS (double-precision floating point) for a memory bandwidth of 16 GB/s. This corresponds to a performance efficiency of 71%. Considering a 45 nm technology, a 100 mm² chip attains 833 GFLOPS, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
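The flavour of such an analytical model can be sketched as follows: for a blocked dense matrix multiplication, the cycle count is bounded by whichever resource saturates first, compute throughput or external-memory bandwidth. The equation and design point below are illustrative assumptions, not the one derived in the paper.

```python
# Illustrative cycle model for a blocked N x N double-precision matmul
# on a many-core chip: cycles = max(compute time, memory-transfer time).
def matmul_cycles(N, cores, flops_per_core_per_cycle, block,
                  bandwidth_bytes_per_cycle):
    flops = 2 * N**3                       # multiply-add operation count
    compute = flops / (cores * flops_per_core_per_cycle)
    # with block x block tiles held in local memory, each element of the
    # two input operands is streamed from external memory ~N/block times
    bytes_moved = 8 * (2 * N**3 / block + N**2)
    transfer = bytes_moved / bandwidth_bytes_per_cycle
    return max(compute, transfer)          # whichever resource saturates

# hypothetical design point: 256 cores, 2 flops/cycle each, 128x128 tiles
print(f"{matmul_cycles(4096, 256, 2, 128, 16):,.0f} cycles")
```

With these placeholder numbers the transfer term dominates, illustrating the memory-bandwidth limit the abstract mentions for area efficiency.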
Abstract:
An adaptive antenna array combines the signals of its elements, using some constraints to produce the radiation pattern of the antenna while maximizing the performance of the system. Direction-of-arrival (DOA) algorithms are applied to determine the directions of impinging signals, whereas beamforming techniques are employed to determine the appropriate weights for the array elements that create the desired pattern. In this paper, a detailed analysis of both categories of algorithms is made for the case of a planar antenna array. Several simulation results show that it is possible to point an antenna array in a desired direction based on the DOA estimation and on the beamforming algorithms. The algorithms used are also compared in terms of runtime and accuracy, characteristics that depend on the SNR of the incoming signal.
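A minimal sketch of DOA estimation with a planar array, using the conventional (Bartlett) beamformer on a 4x4 uniform rectangular array; the geometry, SNR, and scan strategy are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np

M, d = 4, 0.5                      # elements per side, spacing in wavelengths
mx, my = np.meshgrid(np.arange(M), np.arange(M))
pos = np.column_stack([mx.ravel(), my.ravel()]) * d   # element positions

def steer(az_deg, el_deg):
    """Array response for a plane wave from (azimuth, elevation),
    with elevation measured from the array plane."""
    az, el = np.radians([az_deg, el_deg])
    k = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az)])
    return np.exp(2j * np.pi * pos @ k)

# simulate one source at (azimuth 40 deg, elevation 20 deg) plus noise
rng = np.random.default_rng(0)
snap = 200
s = rng.standard_normal(snap) + 1j * rng.standard_normal(snap)
X = np.outer(steer(40, 20), s) + 0.1 * (
    rng.standard_normal((M * M, snap)) + 1j * rng.standard_normal((M * M, snap)))
R = X @ X.conj().T / snap                             # sample covariance

# scan azimuth at the true elevation and pick the Bartlett-spectrum peak
grid = np.arange(0, 91)
p = [np.real(steer(a, 20).conj() @ R @ steer(a, 20)) for a in grid]
print("estimated azimuth:", grid[int(np.argmax(p))], "deg")
```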
Abstract:
“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter, yet still safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter that provides a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP are two special cases of BPC, obtained when the trade-off parameter is 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and provide some case studies to observe the impact of task parameters on the WCTT estimates.
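The recursive structure that such bounds share can be sketched as follows: a flow's traversal-time bound is its base latency plus, recursively, the bounds of higher-priority flows with which it shares links. This is a deliberately crude illustration of the recursive-calculus flavour, not the BP/BPC algorithms proposed in the paper, and the topology and flows below are hypothetical.

```python
flows = {
    # name: (base latency in cycles, links used, priority: lower = higher)
    "f1": (20, {"A", "B"}, 0),
    "f2": (25, {"B", "C"}, 1),
    "f3": (30, {"C", "D"}, 2),
}

def wctt(name, seen=()):
    """Crude recursive upper bound on a flow's worst-case traversal time."""
    lat, links, prio = flows[name]
    bound = lat
    for other, (olat, olinks, oprio) in flows.items():
        # a higher-priority flow sharing at least one link can delay us;
        # its own worst-case traversal is accounted for recursively
        if oprio < prio and links & olinks and other not in seen:
            bound += wctt(other, seen + (other,))
    return bound

for f in flows:
    print(f, "WCTT upper bound:", wctt(f), "cycles")
```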
Abstract:
In this paper, we present a deterministic approach to tsunami hazard assessment for the city and harbour of Sines, Portugal, one of the test sites of project ASTARTE (Assessment, STrategy And Risk Reduction for Tsunamis in Europe). Sines has one of the most important deep-water ports, with oil, petrochemical, liquid-bulk, coal, and container terminals. The port and its industrial infrastructure face the ocean to the southwest, towards the main seismogenic sources. This work considers two different seismic zones: the Southwest Iberian Margin and the Gloria Fault. Within these two regions, we selected a total of six scenarios to assess the tsunami impact at the test site. The tsunami simulations are computed using NSWING, a Non-linear Shallow Water model wIth Nested Grids. In this study, the static effect of tides is analysed for three different tidal stages: MLLW (mean lower low water), MSL (mean sea level), and MHHW (mean higher high water). For each scenario, the tsunami hazard is described by the maximum values of wave height, flow depth, drawback, inundation area, and run-up. Synthetic waveforms are computed at virtual tide gauges at specific locations outside and inside the harbour. The final results describe the impact at the Sines test site considering the single scenarios at mean sea level, the aggregate scenario, and the influence of the tide on the aggregate scenario. The results confirm the composite source of the Horseshoe and Marques de Pombal faults as the worst-case scenario, with wave heights of over 10 m reaching the coast approximately 22 min after the rupture. It dominates the aggregate scenario in about 60% of the impact area at the test site, considering maximum wave height and maximum flow depth. The HSMPF scenario inundates a total area of 3.5 km².
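The roughly 22 min arrival time can be sanity-checked with the long-wave speed c = sqrt(g*h) underlying shallow-water models such as NSWING; the depth and distance below are order-of-magnitude assumptions, not values from the paper.

```python
import math

# Back-of-the-envelope check of the reported ~22 min arrival: tsunamis
# travel as shallow-water (long) waves at c = sqrt(g * h). The depth and
# distance are illustrative, not values from the paper.
g = 9.81
depth_m = 3000.0                     # representative abyssal-plain depth
distance_km = 200.0                  # source-to-coast distance (hypothetical)
c = math.sqrt(g * depth_m)           # ~172 m/s
eta_min = distance_km * 1000 / c / 60
print(f"wave speed {c:.0f} m/s, arrival in ~{eta_min:.0f} min")
```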
Abstract:
This paper presents a new parallel implementation of a previously proposed hyperspectral coded aperture (HYCA) algorithm for compressive sensing on graphics processing units (GPUs). The HYCA method combines the ideas of spectral unmixing and compressive sensing, exploiting the high spatial correlation that can be observed in the data and the generally low number of endmembers needed to explain the data. The proposed implementation exploits the GPU architecture at a low level, thus taking full advantage of the computational power of GPUs by using shared memory and coalesced memory accesses. The proposed algorithm is evaluated not only in terms of reconstruction error but also in terms of computational performance, using two different GPU architectures by NVIDIA: the GeForce GTX 590 and the GeForce GTX TITAN. Experimental results using real data reveal significant speedups with regard to the serial implementation.
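The two figures of merit used in the evaluation can be sketched as follows (this is not the HYCA algorithm itself): reconstruction error against the original data, and speedup of an accelerated run over a serial baseline, with toy stand-in data.

```python
import time
import numpy as np

def rmse(original, reconstructed):
    """Root-mean-square reconstruction error."""
    return float(np.sqrt(np.mean((original - reconstructed) ** 2)))

def speedup(serial_fn, fast_fn, *args):
    """Wall-clock speedup of fast_fn over serial_fn; wrap any pair of
    implementations (e.g., serial vs. GPU-accelerated)."""
    t0 = time.perf_counter(); serial_fn(*args); t1 = time.perf_counter()
    fast_fn(*args); t2 = time.perf_counter()
    return (t1 - t0) / (t2 - t1)

# toy stand-ins: a random "hyperspectral cube" and a lossy round trip
cube = np.random.rand(100, 100, 50).astype(np.float32)
recon = cube + 0.01 * np.random.randn(*cube.shape).astype(np.float32)
print("RMSE:", rmse(cube, recon))
```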