814 results for PC-algorithm
Abstract:
Applying optimization algorithms to PDE-based models of groundwater remediation can greatly reduce remediation cost. However, groundwater remediation analysis requires computationally expensive simulations, so effective parallel optimization could greatly reduce the computational expense. The optimization algorithm used in this research is the Parallel Stochastic Radial Basis Function (RBF) method. It is designed for global optimization of computationally expensive functions with multiple local optima and does not require derivatives. In each iteration of the algorithm, an RBF surrogate is updated with all evaluated points in order to approximate the expensive function. The new RBF surface is then used to generate the next set of points, which are distributed to multiple processors for evaluation. The next evaluation points are selected based on their estimated function values and their distances from all previously evaluated points. Algorithms created for serial computing are not necessarily efficient in parallel, so Parallel Stochastic RBF differs from its serial ancestor. The algorithm is applied to two Groundwater Superfund Remediation sites: the Umatilla Chemical Depot and the former Blaine Naval Ammunition Depot. In this study, the formulation treats pumping rates as decision variables in order to remove the plume of contaminated groundwater. Groundwater flow and contaminant transport are simulated with MODFLOW-MT3DMS. For both problems, the computation takes a large amount of CPU time, especially for the Blaine problem, which requires nearly fifty minutes to simulate a single set of decision variables. Thus, an efficient algorithm and powerful computing resources are essential in both cases. The results are discussed in terms of parallel computing metrics, i.e. speedup and efficiency. We find that with up to 24 parallel processors, the parallel Stochastic RBF algorithm performs excellently, with speedup efficiencies close to or exceeding 100%.
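The selection rule described above, which scores candidate points by a weighted combination of the surrogate's predicted value and the distance to already-evaluated points, can be sketched as follows. This is an illustrative sketch of a stochastic-RBF-style candidate search, not the authors' implementation: the unit-box domain, the weight `w`, the candidate count, and the `surrogate` callable (which stands in for a fitted RBF model, whose construction is not shown) are all assumptions.

```python
import math
import random

def select_candidates(evaluated, surrogate, n_candidates=100, n_select=4, w=0.5):
    """Select the next batch of evaluation points for parallel workers.

    Scores each random candidate by a weighted sum of its normalized
    predicted value (from the surrogate) and its normalized distance to
    the closest already-evaluated point; low predicted value and large
    distance are both good. `evaluated` is a list of points (tuples) in
    the unit box; `surrogate` maps a point to a predicted objective
    value (minimization).
    """
    dim = len(evaluated[0])
    candidates = [tuple(random.uniform(0.0, 1.0) for _ in range(dim))
                  for _ in range(n_candidates)]
    preds = [surrogate(c) for c in candidates]
    dists = [min(math.dist(c, e) for e in evaluated) for c in candidates]

    lo_p, hi_p = min(preds), max(preds)
    lo_d, hi_d = min(dists), max(dists)
    span_p = hi_p - lo_p or 1.0  # guard against a flat surrogate
    span_d = hi_d - lo_d or 1.0

    def score(i):
        value_score = (preds[i] - lo_p) / span_p     # 0 = best prediction
        dist_score = (hi_d - dists[i]) / span_d      # 0 = most exploratory
        return w * value_score + (1.0 - w) * dist_score

    ranked = sorted(range(n_candidates), key=score)
    return [candidates[i] for i in ranked[:n_select]]
```

The `n_select` points returned would then be farmed out to the parallel processors for simultaneous expensive evaluation.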
Abstract:
This paper describes the formulation of a Multi-objective Pipe Smoothing Genetic Algorithm (MOPSGA) and its application to the least-cost water distribution network design problem. Evolutionary Algorithms have been widely utilised for the optimisation of both theoretical and real-world non-linear optimisation problems, including water system design and maintenance problems. In this work we present a pipe smoothing based approach to the creation and mutation of chromosomes which utilises engineering expertise with a view to increasing the performance of the algorithm whilst promoting engineering feasibility within the population of solutions. MOPSGA is based upon the standard Non-dominated Sorting Genetic Algorithm-II (NSGA-II) and incorporates a modified population initialiser and mutation operator which directly target elements of a network with the aim of increasing network smoothness (in terms of progression from one diameter to the next), using network element awareness and an elementary heuristic. The pipe smoothing heuristic used in this algorithm is based upon a fundamental principle employed by water system engineers when designing water distribution pipe networks: the diameter of any pipe is never greater than the sum of the diameters of the pipes directly upstream, resulting in a transition from large to small diameters from the source to the extremities of the network. MOPSGA is assessed on a number of water distribution network benchmarks from the literature, including some large-scale, real-world based systems. The performance of MOPSGA is directly compared to that of NSGA-II with regard to solution quality, engineering feasibility (network smoothness) and computational efficiency. MOPSGA is shown to promote both engineering and hydraulic feasibility whilst attaining good infrastructure costs compared to NSGA-II.
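A minimal check of the pipe-smoothing principle (a pipe's diameter never exceeding the sum of its directly upstream diameters) might look like the sketch below. The dictionary-based network layout is a hypothetical illustration, not MOPSGA's actual data model.

```python
def smoothness_violations(network, diameters):
    """Count pipes violating the pipe-smoothing rule: a pipe's diameter
    should not exceed the sum of the diameters of the pipes directly
    upstream of it.

    `network` maps each pipe id to the list of pipe ids directly
    upstream of it (an empty list marks a source pipe); `diameters`
    maps pipe id to its diameter in mm. Both layouts are hypothetical.
    """
    violations = []
    for pipe, upstream in network.items():
        if not upstream:
            continue  # source pipes have no upstream constraint
        if diameters[pipe] > sum(diameters[u] for u in upstream):
            violations.append(pipe)
    return violations
```

A mutation operator in the spirit of the paper would then target the pipes this check flags, shrinking them (or enlarging their upstream neighbours) to restore smoothness.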
Abstract:
The aim of this study was to analyse the role of the I/D polymorphism of the Angiotensin-Converting Enzyme (ACE) gene and the K121Q polymorphism of PC-1 in changes of glomerular filtration rate (GFR), urinary albumin excretion (UAE) and blood pressure in a cohort of normoalbuminuric (UAE < 20 μg/min) type 1 diabetic patients, with a follow-up of 10.2 ± 2.0 years (6.5 to 13.3 years). UAE (immunoturbidimetry), GFR (single-injection 51Cr-EDTA technique), HbA1c (ion-exchange chromatography) and blood pressure were measured at baseline and at intervals of 1.7 ± 0.6 years. The I/D and K121Q polymorphisms were determined by PCR and enzymatic restriction. Eleven patients had the II genotype, 13 the ID and 6 the DD genotype. Patients carrying the D allele (ID/DD) more frequently developed arterial hypertension and diabetic retinopathy. The 3 patients in the study who developed diabetic nephropathy carried the D allele. The ID/DD patients (n = 19) showed a greater reduction of GFR than the II patients (n = 11) (-0.39 ± 0.29 vs -0.12 ± 0.37 ml/min/month; P = 0.035). In multiple linear regression analysis (R2 = 0.15; F = 4.92; P = 0.035), the presence of the D allele was the only factor associated with the reduction of GFR (-0.29 ± 0.34 ml/min/month; P < 0.05). The increase in UAE (log UAE = 0.0275 ± 0.042 μg/min/month; P = 0.002) was associated only with baseline UAE levels (R2 = 0.17; F = 5.72; P = 0.024). A significant increase (P < 0.05) in the development of arterial hypertension and of new cases of diabetic retinopathy was observed only in patients with the ID/DD genotypes. Twenty-two patients had the KK genotype, 7 the KQ and 1 the QQ genotype. Patients with the KQ/QQ genotypes showed a significant increase (P = 0.045) in new cases of diabetic retinopathy.
In conclusion, the presence of the D allele in this sample of normoalbuminuric, normotensive type 1 diabetic patients is associated with an increased proportion of microvascular complications and arterial hypertension.
Abstract:
It is a consensus in antitrust analysis that a merger between firms with significant market share must be scrutinised before approval because of the harmful effects it may have on competition in the industry. Competition is always desirable because it favours higher levels of economic welfare. In light of the economic investigations carried out by competition authorities, this work analyses measurements from the simulation of unilateral effects of horizontal mergers. The evaluations test the PC-AIDS (Proportionally Calibrated AIDS) model of Epstein and Rubinfeld (2002). Among the conclusions drawn from the use of the model are: (i) in markets with low economic concentration, the model, evaluated over an interval around the estimated own-price elasticity, yields robust measurements; and (ii) in markets with high economic concentration, greater attention must be paid to the correspondence between the calibrated and estimated values of the own-price elasticities, so that the unilateral effects of the merger are neither under- nor overestimated. This result is evaluated in the Nestlé/Garoto case.
Abstract:
In recent years the number of industrial applications for Augmented Reality (AR) and Virtual Reality (VR) environments has increased significantly. Optical tracking systems are an important component of AR/VR environments. In this work, a low-cost optical tracking system with attributes adequate for professional use is proposed. The system works in the infrared spectral region to reduce optical noise. A high-speed camera, equipped with a daylight-blocking filter and infrared flash strobes, transfers uncompressed grayscale images to a regular PC, where image pre-processing software and the PTrack tracking algorithm recognize a set of retro-reflective markers and extract their 3D position and orientation. This work also includes a comprehensive survey of image pre-processing and tracking algorithms. A testbed was built to perform accuracy and precision tests. Results show that the system reaches accuracy and precision levels slightly worse than, but still comparable to, professional systems. Due to its modularity, the system can be expanded by linking several one-camera tracking modules through a sensor fusion algorithm in order to obtain a larger working range. A setup with two modules was built and tested, resulting in performance similar to the stand-alone configuration.
Abstract:
LEÃO, Adriano de Castro; DÓRIA NETO, Adrião Duarte; SOUSA, Maria Bernardete Cordeiro de. New developmental stages for common marmosets (Callithrix jacchus) using mass and age variables obtained by K-means algorithm and self-organizing maps (SOM). Computers in Biology and Medicine, v. 39, p. 853-859, 2009
Abstract:
The evolution of wireless communication systems leads to Dynamic Spectrum Allocation for Cognitive Radio, which requires reliable spectrum sensing techniques. Among the spectrum sensing methods proposed in the literature, those that exploit cyclostationary characteristics of radio signals are particularly suitable for communication environments with low signal-to-noise ratios, or with non-stationary noise. However, such methods have high computational complexity that directly raises the power consumption of devices which often have very stringent low-power requirements. We propose a strategy for cyclostationary spectrum sensing with reduced energy consumption. This strategy is based on the principle that p processors working at slower frequencies consume less power than a single processor for the same execution time. We devise a strict relation between the energy savings and common parallel system metrics. The results of simulations show that our strategy promises very significant savings in actual devices.
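The principle that p slower processors consume less power than one fast processor can be made concrete under a common DVFS approximation, assumed here rather than stated in the abstract: dynamic power scales roughly as frequency cubed (since P ∝ C·V²·f and supply voltage scales roughly with f).

```python
def parallel_energy_ratio(p, alpha=3.0):
    """Energy of p processors running at frequency f/p relative to one
    processor at f, for the same total execution time.

    Assumes power P(f) = k * f**alpha (alpha ~ 3 is a common DVFS
    approximation). The p processors together draw
    p * k * (f/p)**alpha = k * f**alpha / p**(alpha - 1),
    so the energy ratio is p**(1 - alpha).
    """
    return p ** (1.0 - alpha)
```

Under this assumed cubic model, four processors at a quarter of the frequency would use 1/16 of the energy; a flatter power curve (smaller `alpha`) gives proportionally smaller savings.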
Abstract:
Navigation based on visual feedback for robots working in a closed environment can be achieved by mounting a camera on each robot (local vision system). However, this solution requires a camera and local processing capacity for each robot. When possible, a global vision system is a cheaper solution to this problem. In this case, one camera or a small number of cameras covering the whole workspace can be shared by the entire team of robots, saving the cost of a large number of cameras and the associated processing hardware needed in a local vision system. This work presents the implementation and experimental results of a global vision system for mobile mini-robots, using robot soccer as the test platform. The proposed vision system consists of a camera, a frame grabber and a computer (PC) for image processing. The PC is responsible for the team motion control, based on visual feedback, sending commands to the robots through a radio link. So that the system can unequivocally recognize each robot, each one carries a label on its top consisting of two colored circles. Image processing algorithms were developed for the efficient computation, in real time, of the position of all objects (robots and ball) and the orientation of each robot. A major problem was labeling the color of each colored point of the image, in real time, under time-varying illumination conditions. To overcome this problem, an automatic camera calibration based on the K-means clustering algorithm was implemented. This method guarantees that similar pixels will be clustered around a unique color class. The experimental results show that the position and orientation of each robot can be obtained with a precision of a few millimeters. The position and orientation were updated in real time, analyzing 30 frames per second.
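The automatic calibration idea, clustering pixel colors with K-means so that similar pixels fall into the same color class, can be sketched in plain Python. This is a didactic sketch of standard K-means on RGB triples, not the paper's real-time implementation.

```python
import math
import random

def kmeans_colors(pixels, k, iters=20, seed=0):
    """Cluster RGB pixels into k color classes with plain k-means.

    Returns (centroids, labels); each pixel is assigned to the nearest
    centroid, so similar pixels end up in the same color class even as
    illumination shifts the clusters.
    """
    rng = random.Random(seed)
    centroids = rng.sample(pixels, k)  # seed centroids from the data
    labels = [0] * len(pixels)
    for _ in range(iters):
        # Assignment step: nearest centroid by Euclidean distance in RGB.
        for i, px in enumerate(pixels):
            labels[i] = min(range(k), key=lambda j: math.dist(px, centroids[j]))
        # Update step: move each centroid to the mean of its pixels.
        for j in range(k):
            members = [pixels[i] for i in range(len(pixels)) if labels[i] == j]
            if members:
                centroids[j] = tuple(sum(ch) / len(members)
                                     for ch in zip(*members))
    return centroids, labels
```

Once the centroids are calibrated, classifying a live pixel is just a nearest-centroid lookup, which is cheap enough for frame-rate processing.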
Abstract:
This paper presents a method for the automatic identification of dust devil tracks in MOC NA and HiRISE images of Mars. The method is based on Mathematical Morphology and is able to successfully process those images despite their differences in spatial resolution and scene size. A dataset of 200 images of the surface of Mars, representative of the diversity of those track features, was used for developing, testing and evaluating the method, confronting the outputs with reference images made manually. Analysis showed a mean accuracy of about 92%. We also give some examples of how to use the results to obtain information about dust devils, namely mean track width, main direction of movement and coverage per scene. (c) 2012 Elsevier Ltd. All rights reserved.
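As a rough illustration of the kind of Mathematical Morphology operators such a method typically builds on (the abstract does not spell out its operator sequence, so this is an assumed example), here is a plain-Python morphological opening on a binary image, which removes features thinner than the structuring element while preserving larger shapes:

```python
def erode(img, size=3):
    """Binary erosion with a size x size square structuring element."""
    r = size // 2
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # A pixel survives only if its whole neighbourhood is set.
            out[y][x] = int(all(
                0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                for dy in range(-r, r + 1) for dx in range(-r, r + 1)))
    return out

def dilate(img, size=3):
    """Binary dilation with a size x size square structuring element."""
    r = size // 2
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # A pixel is set if any neighbour is set.
            out[y][x] = int(any(
                0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                for dy in range(-r, r + 1) for dx in range(-r, r + 1)))
    return out

def opening(img, size=3):
    """Opening = erosion then dilation: removes isolated noise pixels
    and features thinner than the structuring element."""
    return dilate(erode(img, size), size)
```

On a thresholded Mars image, an opening like this would suppress speckle noise so that only elongated track-like structures remain for further measurement.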
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The physical processes that control stellar evolution are strongly influenced by several stellar parameters, such as rotational velocity, convective envelope mass deepening, and magnetic field intensity. In this study we analyzed the interconnection of some stellar parameters, such as lithium abundance A(Li), chromospheric activity and magnetic field intensity, as well as the variation of these parameters as a function of age, rotational velocity and convective envelope mass deepening, for a selected sample of solar analog and solar twin stars. In particular, we analyzed the convective envelope mass deepening and the dispersion of lithium abundance for these stars. We also studied the evolution of rotation in subgiant stars, because they belong to the evolutionary stage that follows the solar analogs and twins. For this analysis, we computed evolutionary models with the TGEC code to derive the evolutionary stage and the convective envelope mass deepening, and to derive more precisely the stellar mass and age for these 118 stars. Our investigation shows a considerable dispersion of lithium abundance for the solar analog stars. We also found that this dispersion is not explained by the depth of the convective zone; thus the scatter of A(Li) cannot be explained by classical theories of mixing in the convective zone. We conclude that extra mixing processes are necessary to explain this decrease of lithium abundance in solar analog and twin stars. For the subgiant stars, we computed the rotational periods of 30 subgiants observed by the CoRoT satellite, applying two different methods: the Lomb-Scargle algorithm and the Plavchan periodogram. With the TGEC code we computed models with internal distribution of angular momentum to confront the model predictions with the observational results.
With this analysis, we showed that solid-body rotation models are incompatible with the physical interpretation of the observational results. We also concluded that the magnetic field, convective envelope mass deepening, and internal redistribution of angular momentum are essential to explain the evolution of low-mass stars and their observational characteristics. Based on population synthesis simulations, we concluded that the solar neighborhood contains a considerable number of solar twins compared with the set discovered to date. Altogether we foresee the existence of around 400 solar analogs in the solar neighborhood (within a distance of 100 pc). We also studied the angular momentum of solar analogs and twins, and concluded that the angular momentum added by a Jupiter-type planet placed at Jupiter's position is not enough to explain the angular momentum predicted by the Kraft law (Kraft 1970).
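Of the two period-search methods mentioned, the classical Lomb-Scargle periodogram for unevenly sampled time series can be sketched as follows. This is the textbook formulation, not the authors' pipeline; the frequency grid is left to the caller.

```python
import math

def lomb_scargle(t, y, freqs):
    """Classical normalized Lomb-Scargle periodogram.

    `t` are (possibly uneven) observation times, `y` the measured
    values, `freqs` the angular frequencies to probe. Returns the
    normalized power at each frequency; a peak marks a candidate
    rotational period 2*pi/omega.
    """
    n = len(y)
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y) / (n - 1)
    yc = [v - mean for v in y]
    powers = []
    for w in freqs:
        # The offset tau makes the periodogram invariant to time shifts.
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2 * w)
        c = [math.cos(w * (ti - tau)) for ti in t]
        s = [math.sin(w * (ti - tau)) for ti in t]
        cterm = sum(yi * ci for yi, ci in zip(yc, c)) ** 2 / sum(ci * ci for ci in c)
        sterm = sum(yi * si for yi, si in zip(yc, s)) ** 2 / sum(si * si for si in s)
        powers.append((cterm + sterm) / (2 * var))
    return powers
```

In practice one scans a dense grid of trial periods and reads the rotational period off the highest peak, checking it against a false-alarm threshold.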
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
We present a new algorithm for Reverse Monte Carlo (RMC) simulations of liquids. During the simulations, we calculate energy, excess chemical potentials, bond-angle distributions and three-body correlations. This allows us to test the quality and physical meaning of RMC-generated results and its limitations. It also indicates the possibility to explore orientational correlations from simple scattering experiments. The new technique has been applied to bulk hard-sphere and Lennard-Jones systems and compared to standard Metropolis Monte Carlo results. (C) 1998 American Institute of Physics.
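The core RMC acceptance rule, always accept a particle move that improves the fit to the experimental data, otherwise accept with a Boltzmann-like probability on the chi-squared misfit, can be illustrated on a toy 1-D system. This is a didactic sketch: the coarse pair-distance histogram, bin layout and `sigma` are assumptions, not the paper's hard-sphere or Lennard-Jones setup.

```python
import math
import random

def rmc_step(config, target_hist, bins, rng, delta=0.1, sigma=0.05):
    """One Reverse Monte Carlo step on a toy 1-D configuration.

    Displaces one particle, recomputes a coarse pair-distance histogram
    (standing in for the experimental structure data), and applies the
    RMC criterion: accept if the chi-squared misfit to `target_hist`
    decreases, otherwise accept with probability exp(-d_chi2 / 2).
    """
    def hist(cfg):
        h = [0] * len(bins)
        for i in range(len(cfg)):
            for j in range(i + 1, len(cfg)):
                d = abs(cfg[i] - cfg[j])
                for b, edge in enumerate(bins):  # bins = upper edges
                    if d < edge:
                        h[b] += 1
                        break
        return h

    def chi2(h):
        return sum((a - b) ** 2 for a, b in zip(h, target_hist)) / sigma ** 2

    old = chi2(hist(config))
    i = rng.randrange(len(config))
    trial = list(config)
    trial[i] += rng.uniform(-delta, delta)
    new = chi2(hist(trial))
    if new <= old or rng.random() < math.exp(-(new - old) / 2.0):
        return trial, new       # move accepted
    return list(config), old    # move rejected
```

Iterating this step drives the configuration toward structures consistent with the target data, and, as the abstract notes, quantities like bond-angle distributions can then be measured on the accepted configurations.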
Abstract:
This work summarizes the HdHr group of Hermitian integration algorithms for dynamic structural analysis applications. It proposes a procedure for their use when nonlinear terms are present in the equilibrium equation. The simple pendulum problem is solved as a first example and the numerical results are discussed. Directions to be pursued in future research are also mentioned. Copyright (C) 2009 H.M. Bottura and A. C. Rigitano.
Abstract:
The Capacitated Centered Clustering Problem (CCCP) consists of defining a set of p groups with minimum dissimilarity on a network of n points. Demand values are associated with each point, and each group has a demand capacity. The problem is well known to be NP-hard and has many practical applications. In this paper, the hybrid method Clustering Search (CS) is implemented to solve the CCCP. This method identifies promising regions of the search space by generating solutions with a metaheuristic, such as a Genetic Algorithm, and grouping them into clusters that are then explored further with local search heuristics. Computational results for instances available in the literature are presented to demonstrate the efficacy of CS. (C) 2010 Elsevier Ltd. All rights reserved.
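The CS loop described, generate solutions with a metaheuristic, group them into clusters, and intensify promising clusters with local search, can be skeletonized as follows. This is a simplified sketch on real vectors: the distance radius and hit threshold are illustrative assumptions, and the `generate` callable stands in for the Genetic Algorithm.

```python
import math
import random

def clustering_search(f, generate, local_search, n_iters=200,
                      radius=0.3, threshold=5, seed=0):
    """Skeleton of the hybrid Clustering Search (CS) idea.

    `f` is the objective (minimized), `generate(rng)` yields candidate
    solutions (e.g. from a metaheuristic), and `local_search` refines a
    cluster center. A cluster that receives `threshold` hits is deemed
    a promising region and intensified with local search.
    """
    rng = random.Random(seed)
    clusters = []  # each entry: [center, hit_count]
    best = None
    for _ in range(n_iters):
        sol = generate(rng)
        for cluster in clusters:
            if math.dist(sol, cluster[0]) < radius:
                cluster[1] += 1
                if cluster[1] >= threshold:
                    # Promising region: refine its center with local search.
                    sol = local_search(cluster[0])
                    cluster[0], cluster[1] = sol, 0
                break
        else:
            clusters.append([sol, 1])  # open a new cluster
        if best is None or f(sol) < f(best):
            best = sol
    return best
```

The point of the clustering layer is to spend expensive local search only where the metaheuristic keeps landing, rather than on every generated solution.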