888 results for locality algorithms
Abstract:
Latency can be defined as the sum of the arrival times at the customers. Minimum latency problems are especially relevant in applications related to humanitarian logistics. This thesis presents algorithms for solving a family of vehicle routing problems with minimum latency. First, the latency location routing problem (LLRP) is considered. It consists of determining the subset of depots to be opened and the routes that a set of homogeneous capacitated vehicles must perform in order to visit a set of customers, such that the sum of the demands of the customers assigned to each vehicle does not exceed the vehicle's capacity. For solving this problem, three metaheuristic algorithms combining simulated annealing and variable neighborhood descent, as well as an iterated local search (ILS) algorithm, are proposed. Furthermore, the multi-depot cumulative capacitated vehicle routing problem (MDCCVRP) and the multi-depot k-traveling repairman problem (MDk-TRP) are solved with the proposed ILS algorithm. The MDCCVRP is a special case of the LLRP in which all the depots can be opened, and the MDk-TRP is a special case of the MDCCVRP in which the capacity constraints are relaxed. Finally, an LLRP with stochastic travel times is studied. A two-stage stochastic programming model and a variable neighborhood search algorithm are proposed for solving the problem. Furthermore, a sampling method is developed for tackling instances with an infinite number of scenarios. Extensive computational experiments show that the proposed methods are effective for solving the problems under study.
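The thesis's own operators are not reproduced here, but the latency objective and the basic ILS loop it builds on can be sketched for a single route from a single depot (a minimal Python illustration; the 2-opt neighborhood and random-swap perturbation are illustrative assumptions, not the thesis's actual algorithm):

```python
import random

def route_latency(route, dist, depot=0):
    """Latency = sum of the arrival times at the customers on the route."""
    t, total, prev = 0.0, 0.0, depot
    for c in route:
        t += dist[prev][c]      # arrival time at customer c
        total += t
        prev = c
    return total

def two_opt(route, dist):
    """Best segment reversal, a typical variable-neighborhood-descent move."""
    best = route[:]
    for i in range(len(route) - 1):
        for j in range(i + 1, len(route)):
            cand = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
            if route_latency(cand, dist) < route_latency(best, dist):
                best = cand
    return best

def iterated_local_search(route, dist, iters=50, seed=0):
    """ILS skeleton: local search, perturb, local search, accept if better."""
    rng = random.Random(seed)
    best = two_opt(route, dist)
    for _ in range(iters):
        pert = best[:]
        i, j = sorted(rng.sample(range(len(pert)), 2))
        pert[i], pert[j] = pert[j], pert[i]      # perturbation: random swap
        cand = two_opt(pert, dist)
        if route_latency(cand, dist) < route_latency(best, dist):
            best = cand
    return best
```

The same objective generalizes to several vehicles and depots by summing latencies over all routes, with capacity checks on each vehicle's assigned demand.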
Abstract:
The main objective of this thesis is to exploit Kubeflow, Google's native open-source platform, and specifically Kubeflow Pipelines, to execute a scalable Federated Learning (FL) process in a simplified, 5G-like test architecture hosting a Kubernetes cluster. The widely adopted FedAVG algorithm and its optimization FedProx are applied, leveraging the ML platform's abilities to ease the development and production cycle of this specific FL process. FL algorithms are increasingly promising and adopted both in cloud application development and in 5G communication enhancement: data from the monitoring of the underlying telco infrastructure feeds training and aggregation at edge nodes to optimize the algorithm's global model (which could be used, for example, for resource provisioning to reach an agreed QoS for the underlying network slice). After a study of the available papers and scientific articles on FL, and with the guidance of the CTTC, which suggested studying and using Kubeflow to host the algorithm, we found that this approach to deploying the whole FL cycle was not documented and deserved deeper investigation. This study may help demonstrate the suitability of the Kubeflow platform for the development of new FL algorithms supporting new applications, and in particular it tests the performance of FedAVG in a simulated client-to-cloud communication using the MNIST dataset as an FL benchmark.
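The aggregation rule at the heart of FedAVG, and the proximal term that distinguishes FedProx, can be sketched independently of Kubeflow. The following minimal NumPy illustration uses a least-squares model purely as a stand-in; all names and the local-solver details are assumptions, not the thesis's pipeline code:

```python
import numpy as np

def local_update(w_global, X, y, lr=0.1, epochs=5, mu=0.0):
    """One client's local training: gradient descent on squared loss.

    With mu > 0 this becomes a FedProx-style update: the proximal term
    mu * (w - w_global) keeps the local model close to the global one.
    """
    w = w_global.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y) + mu * (w - w_global)
        w -= lr * grad
    return w

def fedavg_round(w_global, clients, lr=0.1, mu=0.0):
    """One federated round: broadcast the global model, train locally,
    then average the client models weighted by their dataset sizes."""
    updates = [local_update(w_global, X, y, lr, mu=mu) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)
```

In the thesis's setting each `local_update` would run as a pipeline step on an edge node, with only model parameters (never raw data) travelling to the aggregator.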
Abstract:
Unlike classical computers, quantum computers operate according to the laws of quantum mechanics; qubits, the basic units of quantum information, therefore possess the extremely interesting properties of superposition and entanglement. These distinctly quantum properties underlie countless algorithms, which in many cases outperform their classical counterparts. The goal of this thesis is to introduce quantum computational logic from a theoretical standpoint and to briefly review one class of such quantum algorithms, namely Quantum Phase Estimation algorithms, whose purpose is to estimate with arbitrary precision the eigenvalues of a given unitary operator. These algorithms play a crucial role in various areas of quantum information theory; the results of implementing the discussed algorithms both on a simulator and on a real quantum computer are therefore also presented.
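As a minimal illustration of the idea, the outcome distribution of textbook Quantum Phase Estimation can be computed directly from its closed-form amplitudes (a NumPy sketch of the standard circuit's statistics, not the implementation run on the simulator or hardware in the thesis):

```python
import numpy as np

def qpe_distribution(phi, n_qubits):
    """Outcome probabilities of textbook Quantum Phase Estimation.

    For an eigenstate U|u> = exp(2*pi*i*phi)|u> and n counting qubits,
    Hadamards, controlled powers of U and an inverse QFT give outcome k
    the amplitude (1/N) * sum_j exp(2*pi*i*j*(phi - k/N)), with N = 2**n.
    """
    N = 2 ** n_qubits
    j = np.arange(N)[:, None]          # controlled-power index
    k = np.arange(N)[None, :]          # measurement outcome
    amps = np.exp(2j * np.pi * j * (phi - k / N)).sum(axis=0) / N
    return np.abs(amps) ** 2
```

When phi is exactly representable on n bits the distribution is a single spike at k = phi * 2**n; otherwise it concentrates on the nearest grid points, which is why more counting qubits yield arbitrarily precise eigenphase estimates.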
Abstract:
This work analyzes particle-filter tracking algorithms for point objects and extended targets in a radar application. Through simulations, the number of particles and the process and measurement noise of the particle filter have been optimized. Four different scenarios have been considered: a point object with a linear trajectory, a point object with a non-linear trajectory, an extended object with a linear trajectory, and an extended object with a non-linear trajectory. The extended target has been modelled as an ellipse parametrized by the minor and major axes, the orientation angle, and the center coordinates (5 parameters overall).
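The predict/update/resample cycle underlying such trackers can be sketched for the simplest case, a 1-D point object with a (near-)constant-velocity model (a minimal bootstrap particle filter in NumPy; the noise values and model are illustrative assumptions, not the tuned values from this work):

```python
import numpy as np

def particle_filter(zs, n_particles=500, q=0.5, r=1.0, seed=0):
    """Bootstrap particle filter for a 1-D constant-velocity target.

    State = [position, velocity]; q is the process-noise std, r the
    measurement-noise std; zs are noisy position measurements, one per step.
    """
    rng = np.random.default_rng(seed)
    parts = rng.normal(0.0, 1.0, size=(n_particles, 2))   # initial cloud
    estimates = []
    for z in zs:
        # predict: constant-velocity motion plus process noise
        parts[:, 0] += parts[:, 1] + rng.normal(0, q, n_particles)
        parts[:, 1] += rng.normal(0, q, n_particles)
        # update: weight particles by the Gaussian measurement likelihood
        w = np.exp(-0.5 * ((z - parts[:, 0]) / r) ** 2) + 1e-12
        w /= w.sum()
        estimates.append(np.sum(w * parts[:, 0]))          # weighted mean
        # resample (multinomial) to fight weight degeneracy
        parts = parts[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)
```

For the extended-target case the state vector would grow to the five ellipse parameters named in the abstract, with the likelihood evaluated against the full set of returns rather than a single point measurement.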
Abstract:
The decomposition of Feynman integrals into a basis of independent master integrals is an essential ingredient of high-precision theoretical predictions, and it often represents a major bottleneck when processes with a high number of loops and legs are involved. In this thesis we present a new algorithm for the decomposition of Feynman integrals into master integrals within the formalism of intersection theory. Intersection theory is a novel approach that makes it possible to decompose Feynman integrals into master integrals via projections, based on a scalar product between Feynman integrals called the intersection number. We propose a new, purely rational algorithm for the calculation of intersection numbers of differential $n$-forms that avoids the appearance of algebraic extensions. We show how expansions around non-rational poles, which are a bottleneck of existing algorithms for intersection numbers, can be avoided by performing a series expansion around a rational polynomial irreducible over $\mathbb{Q}$, which we refer to as a $p(z)$-adic expansion. The algorithm we developed has been implemented and tested on several diagrams, both at one and two loops.
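For reference, the projection that the abstract alludes to can be written compactly in the form used in the intersection-theory literature (a standard master decomposition formula, not the thesis's specific algorithm): given $\nu$ master integrals $J_i$ and bases $e_i$, $h_j$ of twisted cocycles and their duals,

```latex
% Projection of a Feynman integral I onto nu master integrals J_i
% via intersection numbers <.|.> of twisted (co)cycles.
I = \sum_{i=1}^{\nu} c_i \, J_i ,
\qquad
c_i = \sum_{j=1}^{\nu} \langle I \,|\, h_j \rangle
      \left(\mathbf{C}^{-1}\right)_{ji} ,
\qquad
\mathbf{C}_{ij} = \langle e_i \,|\, h_j \rangle .
```

The rational algorithm of the thesis concerns the efficient evaluation of the intersection numbers $\langle \cdot | \cdot \rangle$ entering these coefficients.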
Abstract:
Radio Simultaneous Localization and Mapping (SLAM) consists of simultaneously tracking a target and estimating the surrounding environment, in order to build a map and estimate the target's movements within it. It is an increasingly exploited technique for automotive applications, to improve the localization of obstacles and the target's relative movement with respect to them; for emergency situations, for example when it is necessary to explore (with a drone or a robot) environments with limited visibility; and for personal radar applications, thanks to its versatility and low cost. Until now, such systems have been based on light detection and ranging (lidar) or visual cameras: high-accuracy but expensive approaches that are limited to specific environments and weather conditions. Radar-based systems, instead, can operate in exactly the same way in smoke, fog, or simple darkness. In this thesis activity, the Fourier-Mellin algorithm is analyzed and implemented to verify its applicability to Radio SLAM, in which radar frames can be treated as images and the radar motion between consecutive frames can be recovered through image registration. Furthermore, a simplified version of the algorithm is proposed, in order to overcome the problems the Fourier-Mellin algorithm encounters when working with real radar images and to improve performance. The INRAS RBK2, a MIMO 2x16 mmWave radar, is used for the experimental acquisitions, consisting of multiple tests performed in Lab-E of the Cesena Campus, University of Bologna. The performance of Fourier-Mellin and of its simplified version is also compared with that of the MatchScan algorithm, a classic algorithm for SLAM systems.
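The registration step on which Fourier-Mellin builds can be sketched for pure translation via phase correlation (a NumPy illustration of the general technique, not the thesis's implementation; the full Fourier-Mellin pipeline additionally resamples the spectrum to log-polar coordinates so that rotation and scale also become translations handled the same way):

```python
import numpy as np

def phase_correlation(frame_a, frame_b):
    """Estimate the integer (dy, dx) shift taking frame_a into frame_b.

    The normalized cross-power spectrum of the two frames has, back in
    the spatial domain, a sharp peak at the relative displacement.
    """
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12          # keep only the phase difference
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the frame into negative offsets
    h, w = frame_a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

Applied to consecutive radar frames, the recovered per-frame displacement is exactly the ego-motion increment that a Radio SLAM front end accumulates into a trajectory.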
Abstract:
Modern High-Performance Computing (HPC) systems are gradually increasing in size and complexity due to the corresponding demand for larger simulations, which require more complicated tasks and higher accuracy. However, with Dennard scaling approaching its ultimate power limit, the efficiency of software also plays an important role in increasing the overall performance of a computation. Tools that measure application performance in these increasingly complex environments provide insight into the intricate ways in which software and hardware interact. Monitoring power consumption in order to save energy is possible through processor interfaces such as Intel's Running Average Power Limit (RAPL). Given the low level of these interfaces, they are often paired with an application-level tool such as the Performance Application Programming Interface (PAPI). Since several problems in many heterogeneous fields can be represented as large linear systems, an optimized and scalable linear system solver can significantly decrease the time spent on their resolution. One of the most widely used algorithms for the resolution of large simulations is Gaussian elimination, whose most popular implementation for HPC systems is found in the Scalable Linear Algebra PACKage (ScaLAPACK) library. Another relevant algorithm, which is gaining popularity in the academic field, is the Inhibition Method. This thesis compares the energy consumption of the Inhibition Method and of Gaussian elimination from ScaLAPACK, profiling their execution during the resolution of linear systems on the HPC architecture offered by CINECA. Moreover, it compares the energy and power values for different rank, node, and socket configurations. The monitoring tools employed to track the energy consumption of these algorithms are PAPI and RAPL, integrated with the parallel execution of the algorithms managed with the Message Passing Interface (MPI).
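For context, the kernel that ScaLAPACK distributes block-cyclically across MPI ranks can be sketched in its serial reference form (a minimal NumPy illustration of Gaussian elimination with partial pivoting; the Inhibition Method is not reproduced here):

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting,
    followed by back substitution. Serial reference version of the
    kernel that ScaLAPACK parallelizes for HPC systems."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + int(np.argmax(np.abs(A[k:, k])))   # partial pivoting
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]                  # elimination multiplier
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                 # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

The O(n^3) elimination loop is precisely the part whose distributed execution dominates the time and energy measured by PAPI/RAPL in the thesis's experiments.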
Abstract:
A miniaturised gas analyser based on a substrate-integrated hollow waveguide (iHWG) coupled to a micro-sized near-infrared spectrophotometer, comprising a linear variable filter and an array of InGaAs detectors, is described and evaluated. This gas sensing system was applied to analyse surrogate samples of natural fuel gas containing methane, ethane, propane and butane, quantified using multivariate regression models based on partial least squares (PLS) algorithms with Savitzky-Golay first-derivative data preprocessing. External validation of the obtained models reveals root mean square errors of prediction of 0.37, 0.36, 0.67 and 0.37% (v/v) for methane, ethane, propane and butane, respectively. The developed sensing system provides particularly rapid response times upon composition changes of the gaseous sample (approximately 2 s), due to the minute volume of the iHWG-based measurement cell. The sensing system is fully portable, with a hand-held-sized analyser footprint, and thus ideally suited for field analysis. Last but not least, the obtained results corroborate the potential of NIR-iHWG analysers for monitoring the quality of natural gas and petrochemical gaseous products.
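The Savitzky-Golay first-derivative preprocessing step can be sketched with its well-known closed-form convolution coefficients (a NumPy illustration assuming a 5-point window and a quadratic fit; the paper does not state its window length, and the derivative spectra would then feed the PLS regression):

```python
import numpy as np

def savgol_first_derivative(spectrum):
    """Savitzky-Golay first-derivative filter, 5-point quadratic fit.

    For a quadratic (or cubic) least-squares fit over 5 equally spaced
    points, the closed-form first-derivative convolution coefficients
    are (-2, -1, 0, 1, 2) / 10.
    """
    coeffs = np.array([-2, -1, 0, 1, 2]) / 10.0
    # correlation, not convolution: np.convolve flips the kernel,
    # so reverse it first; 'valid' drops the edge points
    return np.convolve(spectrum, coeffs[::-1], mode="valid")
```

Differentiating the spectra removes baseline offsets and emphasizes band shapes, which is why it is a common preprocessing choice before building PLS calibration models.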
Abstract:
Diabetic Retinopathy (DR) is a complication of diabetes that can lead to blindness if not readily discovered. Automated screening algorithms have the potential to improve the identification of patients who need further medical attention; however, the identification of lesions must be accurate to be useful for clinical application. The bag-of-visual-words (BoVW) algorithm employs a maximum-margin classifier in a flexible framework that is able to detect the most common DR-related lesions, such as microaneurysms, cotton-wool spots and hard exudates. BoVW makes it possible to bypass the need for pre- and post-processing of the retinographic images, as well as the need for specific ad hoc techniques for the identification of each type of lesion. An extensive evaluation of the BoVW model was performed using three large retinography datasets (DR1, DR2 and Messidor) with different resolutions, collected by different healthcare personnel. The results demonstrate that the BoVW classification approach can identify different lesions within an image without having to use a different algorithm for each lesion, reducing processing time and providing a more flexible diagnostic system. Our BoVW scheme is based on sparse low-level feature detection with a Speeded-Up Robust Features (SURF) local descriptor and on mid-level features based on semi-soft coding with max pooling. The best BoVW representation for retinal image classification achieved an area under the receiver operating characteristic curve (AUC-ROC) of 97.8% (exudates) and 93.5% (red lesions) under a cross-dataset validation protocol. In assessing the accuracy of detecting cases that require referral within one year, the sparse extraction technique associated with semi-soft coding and max pooling obtained an AUC of 94.2 ± 2.0%, outperforming current methods.
These results indicate that, for retinal image classification tasks in clinical practice, BoVW matches and, in some instances, surpasses results obtained using dense detection (widely believed to be the best choice in many vision problems) for the low-level descriptors.
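The mid-level stage of such a BoVW pipeline can be sketched as follows (a NumPy illustration of soft coding with max pooling; the paper's semi-soft variant additionally restricts each descriptor's assignment to its nearest visual words, and the SURF detection and SVM classification stages are not shown):

```python
import numpy as np

def bovw_encode(descriptors, codebook, sigma=1.0):
    """Mid-level BoVW encoding: soft assignment to visual words + max pooling.

    descriptors: (n, d) local features (e.g. SURF); codebook: (k, d)
    visual words. Each descriptor receives a soft membership to every
    word; max pooling keeps, per word, the strongest response over the
    whole image, yielding a fixed-length (k,) image signature.
    """
    # squared distances between every descriptor and every codeword
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    resp = np.exp(-d2 / (2 * sigma ** 2))         # soft coding responses
    resp /= resp.sum(axis=1, keepdims=True)       # normalize per descriptor
    return resp.max(axis=0)                       # max pooling over the image
```

The resulting fixed-length vector is what the maximum-margin classifier consumes, regardless of how many local features each retinal image produced.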
Abstract:
The poison frog genus Ameerega (Dendrobatidae) currently contains 32 species, distributed from central Brazil through western Amazonia to the lower Andean versant. In addition, three trans-Andean species have been allocated to Ameerega (Andrade et al. 2013; Frost 2014). Ameerega berohoka (Vaz-Silva & Maciel 2011) was described based on specimens from central Brazil (type-locality: Arenópolis, GO) and is assumed to occur in western and southwestern parts of the state of Goiás (Frost 2014). More recently, Andrade et al. (2013) extended its distribution to the state of Mato Grosso. Here we re-describe the advertisement call of A. berohoka, providing additional information on its temporal structure and spectral traits. Our observations also provide a new distribution record for this species in the state of Mato Grosso.
Abstract:
The taxonomic status of a disjunct population of Phyllomedusa from southern Brazil was diagnosed using molecular, chromosomal, and morphological approaches, which resulted in the recognition of a new species of the P. hypochondrialis group. Here, we describe P. rustica sp. n. from the Atlantic Forest biome, found in natural highland grassland formations on a plateau in the south of Brazil. Phylogenetic inferences placed P. rustica sp. n. in a subclade that includes P. rhodei plus all the highland species of the clade. Chromosomal morphology is conservative, supporting the inference of homologies among the karyotypes of the species of this genus. Phyllomedusa rustica is apparently restricted to its type-locality, and we discuss the potential impact of this finding on the strategies applied to the conservation of the natural grassland formations found within the Brazilian Atlantic Forest biome in southern Brazil. We suggest that conservation strategies should be modified to guarantee the preservation of this species.
Abstract:
Current literature has elucidated a new phenotype, the metabolically healthy obese (MHO), with a risk of cardiovascular disease (CVD) similar to that of normal-weight individuals. Few studies have examined the MHO phenotype in an aging population, especially in association with subclinical CVD. This cross-sectional study population consisted of 208 individuals aged eighty and older. Anthropometric, biochemical, and radiological parameters were measured to assess obesity, metabolic health (assessed by the National Cholesterol Education Program - Adult Treatment Panel (NCEP-ATP III) criteria), and subclinical measures of CVD. The prevalence of MHO was 13.5% (N = 28). No significant association with MHO was noted for age, coronary artery calcium score, cIMT, or hs-CRP > 3 mg/dl (p = NS). Our results suggest that the MHO phenotype exists in the elderly; however, subclinical CVD measures did not differ in sub-group analysis, suggesting that traditional metabolic risk factor algorithms may not be accurate in the very elderly.
Abstract:
Monte Carlo track structure (MCTS) simulations have been recognized as useful tools for radiobiological modeling. However, the authors noticed several issues regarding the consistency of reported data. Therefore, in this work, they analyze the impact of various user-defined parameters on simulated direct DNA damage yields. In addition, they draw attention to discrepancies in the published literature in DNA strand break (SB) yields and in the selected methodologies. The MCTS code Geant4-DNA was used to compare radial dose profiles in a nanometer-scale region of interest (ROI) for photon sources of varying sizes and energies. Then, electron tracks of 0.28 keV-220 keV were superimposed on a geometric DNA model composed of 2.7 × 10⁶ nucleosomes, and SBs were simulated according to four definitions based on energy deposits or energy transfers in DNA strand targets compared to a threshold energy ETH. The SB frequencies and complexities in nucleosomes as a function of incident electron energy were obtained. SBs were classified into higher-order clusters, such as single and double strand breaks (SSBs and DSBs), based on inter-SB distances and on the number of affected strands. Comparisons of different nonuniform dose distributions lacking charged-particle equilibrium may lead to erroneous conclusions regarding the effect of energy on relative biological effectiveness. The energy transfer-based SB definitions give SB yields similar to the definition based on energy deposit when ETH ≈ 10.79 eV, but deviate significantly for higher ETH values. Between 30 and 40 nucleosomes/Gy show at least one SB in the ROI. The number of nucleosomes that present a complex damage pattern of more than 2 SBs, and the degree of complexity of the damage in these nucleosomes, diminish as the incident electron energy increases. DNA damage classification into SSBs and DSBs is highly dependent on the definitions of these higher-order structures and on their implementations.
The authors show that, for the four studied models, the expected yields differ by up to 54% for SSBs and by up to 32% for DSBs, as a function of the incident electron energy and of the models being compared. MCTS simulations make it possible to compare the direct DNA damage types and complexities induced by ionizing radiation. However, simulation results depend to a large degree on user-defined parameters, definitions, and algorithms, such as the DNA model, the dose distribution, the SB definition, and the DNA damage clustering algorithm. These interdependencies should be well controlled during the simulations and explicitly reported when comparing results to experiments or calculations.
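The kind of clustering step on which such SSB/DSB classifications depend can be sketched as follows (a minimal Python illustration of one common convention — group SBs closer than a base-pair threshold and count a cluster touching both strands as a DSB; as the abstract stresses, actual codes differ in exactly these definitions and thresholds):

```python
def classify_breaks(breaks, max_gap=10):
    """Cluster strand breaks (SBs) into SSB and DSB counts.

    breaks: iterable of (position_bp, strand) pairs with strand in {0, 1}.
    SBs whose positions differ by at most `max_gap` base pairs fall in the
    same cluster; a cluster hitting both strands counts as one DSB,
    otherwise as one SSB.
    """
    breaks = sorted(breaks)
    if not breaks:
        return 0, 0
    ssb = dsb = 0
    cluster = [breaks[0]]
    for b in breaks[1:] + [(float("inf"), None)]:   # sentinel flushes last cluster
        if b[0] - cluster[-1][0] <= max_gap:
            cluster.append(b)
        else:
            if len({s for _, s in cluster}) == 2:
                dsb += 1                            # breaks on both strands
            else:
                ssb += 1
            cluster = [b]
    return ssb, dsb
```

Changing `max_gap` or the both-strands rule directly changes the reported SSB and DSB yields, which is the sensitivity the authors quantify.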
Abstract:
Biogeography and metacommunity ecology provide two different perspectives on species diversity. Both are spatial in nature, but their spatial scales do not necessarily match. With the recent boom of metacommunity studies, there is an increasing need for a clear discrimination of the spatial scales relevant to both perspectives; this discrimination is a necessary prerequisite for an improved understanding of ecological phenomena across scales. Here we provide a case study to illustrate some spatial scale-dependent concepts in recent metacommunity studies and to identify potential pitfalls. We present the diversity patterns of Neotropical lepidopterans and spiders viewed from both the metacommunity and the biogeographical perspective. Specifically, we investigated how the relative importance of niche- and dispersal-based processes for community assembly changes at two spatial scales: the metacommunity scale, i.e. within a locality, and the biogeographical scale, i.e. among localities widely scattered along a macroclimatic gradient. As expected, niche-based processes dominated community assembly at the metacommunity scale, while dispersal-based processes played a major role at the biogeographical scale for both taxonomic groups. However, we also observed small but significant spatial effects at the metacommunity scale and environmental effects at the biogeographical scale, as well as differences in diversity patterns between the two taxonomic groups corresponding to differences in their dispersal modes. Our results thus support the idea of a continuum of processes interactively shaping diversity patterns across scales and emphasize the need to integrate the metacommunity and biogeographical perspectives.
Abstract:
Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) has been widely used for the identification and classification of microorganisms based on their proteomic fingerprints. However, the use of MALDI-TOF MS in plant research has been very limited. In the present study, a first protocol is proposed for metabolic fingerprinting by MALDI-TOF MS using three different MALDI matrices, with subsequent multivariate data analysis by in-house algorithms implemented in the R environment, for the taxonomic classification of plants from different genera, families and orders. By merging the data acquired with different matrices and ionization modes, and by careful selection of algorithms and parameters, we demonstrate that a close taxonomic classification can be achieved based on plant metabolic fingerprints, with 92% similarity to the taxonomic classifications found in the literature. The present work therefore highlights the great potential of applying MALDI-TOF MS to the taxonomic classification of plants and, furthermore, provides a preliminary foundation for future research.