963 results for multi-source noise


Relevance: 30.00%

Abstract:

OBJECTIVES The SOURCE XT Registry (Edwards SAPIEN XT Aortic Bioprosthesis Multi-Region Outcome Registry) assessed the use and clinical outcomes of the SAPIEN XT (Edwards Lifesciences, Irvine, California) valve in the real-world setting. BACKGROUND Transcatheter aortic valve replacement is an established treatment for high-risk/inoperable patients with severe aortic stenosis. The SAPIEN XT is a balloon-expandable valve with enhanced features allowing delivery via a lower-profile sheath. METHODS The SOURCE XT Registry is a prospective, multicenter, post-approval study. Data from 2,688 patients at 99 sites were analyzed. The main outcome measures were all-cause mortality, stroke, major vascular complications, bleeding, and pacemaker implantations at 30 days and 1 year post-procedure. RESULTS The mean age was 81.4 ± 6.6 years, 42.3% of patients were male, and the mean logistic EuroSCORE (European System for Cardiac Operative Risk Evaluation) was 20.4 ± 12.4%. Patients had a high burden of coronary disease (44.2%), diabetes (29.4%), renal insufficiency (28.9%), atrial fibrillation (25.6%), and peripheral vascular disease (21.2%). Survival was 93.7% at 30 days and 80.6% at 1 year. At 30-day follow-up, the stroke rate was 3.6%, the rate of major vascular complications was 6.5%, the rate of life-threatening bleeding was 5.5%, the rate of new pacemakers was 9.5%, and the rate of moderate/severe paravalvular leak was 5.5%. Multivariable analysis identified nontransfemoral approach (hazard ratio [HR]: 1.84; p < 0.0001), renal insufficiency (HR: 1.53; p < 0.0001), liver disease (HR: 1.67; p = 0.0453), moderate/severe tricuspid regurgitation (HR: 1.47; p = 0.0019), porcelain aorta (HR: 1.47; p = 0.0352), and atrial fibrillation (HR: 1.41; p = 0.0014) as the factors with the highest HRs for 1-year mortality. Major vascular complications and major/life-threatening bleeding were the complications most frequently associated with a significant increase in 1-year mortality. CONCLUSIONS The SOURCE XT Registry demonstrated appropriate use of the SAPIEN XT transcatheter heart valve in the first year post-commercialization in Europe. The safety profile is sustained, and clinical benefits have been established in the real-world setting. (SOURCE XT Registry; NCT01238497).
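The multivariable analysis reported above is a Cox proportional-hazards regression, in which the exponentiated coefficients are the hazard ratios. As a minimal, hedged illustration of how such a model is fitted (using the lifelines package and its bundled demo dataset; this is not the registry data or the authors' analysis code):

```python
# Sketch of a multivariable Cox proportional-hazards fit with lifelines.
# load_rossi() is the library's demo dataset; it stands in for registry data.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                      # columns: week (time), arrest (event), covariates
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
cph.print_summary()                    # the exp(coef) column is the hazard ratio
```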

Relevance: 30.00%

Abstract:

OBJECTIVES Readout-segmented echo planar imaging (rs-EPI) significantly reduces susceptibility artifacts in diffusion-weighted imaging (DWI) of the breast compared to single-shot EPI but is limited by longer scan times. To compensate for this, we tested a new simultaneous multi-slice (SMS) acquisition for accelerated rs-EPI. MATERIALS AND METHODS After approval by the local ethics committee, eight healthy female volunteers (age, 38.9 ± 13.1 years) underwent breast MRI at 3 T. Conventional as well as two-fold (2× SMS) and three-fold (3× SMS) slice-accelerated rs-EPI sequences were acquired at b-values of 50 and 800 s/mm². Two independent readers analyzed the apparent diffusion coefficient (ADC) in fibroglandular breast parenchyma. The signal-to-noise ratio (SNR) was estimated based on the subtraction method. ADC and SNR were compared between sequences by using the Friedman test. RESULTS The acquisition time was 4:21 min for conventional rs-EPI, 2:35 min for 2× SMS rs-EPI and 1:44 min for 3× SMS rs-EPI. ADC values were similar in all sequences (mean values 1.62×10⁻³ mm²/s, p = 0.99). Mean SNR was 27.7-29.6, and no significant differences were found among the sequences (p = 0.83). CONCLUSION SMS rs-EPI yields similar ADC values and SNR compared to conventional rs-EPI at markedly reduced scan time. Thus, SMS excitation increases the clinical applicability of rs-EPI for DWI of the breast.
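For readers unfamiliar with the two quantitative measures compared here, the sketch below shows, under simplified assumptions (two b-values, two repeated acquisitions, a rectangular ROI, synthetic data), how an ADC map and a subtraction-method SNR estimate are typically computed. It is illustrative only, not the study's processing pipeline.

```python
import numpy as np

# Hypothetical repeated acquisitions at two b-values (b = 50 and 800 s/mm^2).
rng = np.random.default_rng(0)
b_low, b_high = 50.0, 800.0
s_low         = 1000 * np.ones((64, 64)) + rng.normal(0, 20, (64, 64))
s_high        = 1000 * np.exp(-1.6e-3 * (b_high - b_low)) + rng.normal(0, 20, (64, 64))
s_high_repeat = 1000 * np.exp(-1.6e-3 * (b_high - b_low)) + rng.normal(0, 20, (64, 64))

# ADC from the mono-exponential decay between the two b-values.
adc = np.log(s_low / s_high) / (b_high - b_low)          # mm^2/s

# Subtraction-method SNR: signal from the averaged image, noise from the
# standard deviation of the difference image divided by sqrt(2).
roi = (slice(16, 48), slice(16, 48))
signal = 0.5 * (s_high[roi] + s_high_repeat[roi]).mean()
noise  = (s_high[roi] - s_high_repeat[roi]).std() / np.sqrt(2)
print(f"mean ADC = {adc[roi].mean():.2e} mm^2/s, SNR = {signal / noise:.1f}")
```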

Relevance: 30.00%

Abstract:

Information-centric networking (ICN) offers new perspectives on mobile ad-hoc communication because routing is based on names rather than on endpoint identifiers. Since every content object has a unique name and is signed, authentic content can be stored and cached by any node. If connectivity to a content source breaks, it is not necessarily required to build a new path to the same source; content can also be retrieved from a closer node that provides a copy of the same content. For example, in case of collisions, retransmissions do not need to be performed over the entire path but, thanks to caching, only over the link where the collision occurred. Furthermore, multiple requests can be aggregated to improve the scalability of wireless multi-hop communication. In this work, we base our investigations on Content-Centric Networking (CCN), which is a popular ICN architecture. While related works in wireless CCN communication are based exclusively on broadcast communication, we show that this is not needed for efficient mobile ad-hoc communication. With Dynamic Unicast, requesters can build unicast paths to content sources after they have been identified via broadcast. We have implemented Dynamic Unicast in CCNx, which provides a reference implementation of the CCN concepts, and performed extensive evaluations in diverse mobile scenarios using NS3-DCE, the direct code execution framework for the NS3 network simulator. Our evaluations show that Dynamic Unicast can result in more efficient communication than broadcast communication, while still supporting all CCN advantages such as caching, scalability and implicit content discovery.
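A hypothetical toy sketch of the Dynamic Unicast idea described above (not the CCNx implementation): a requester discovers a content source via a broadcast Interest, caches the responder's address per name prefix, sends subsequent Interests by unicast, and falls back to broadcast discovery when the unicast path breaks.

```python
# Toy model of Dynamic Unicast forwarding (illustrative only; the actual
# implementation described above lives inside CCNx and was evaluated in NS3-DCE).
class Network:
    """Hypothetical single-hop wireless neighborhood with named content stores."""
    def __init__(self, stores):
        self.stores = stores              # address -> set of content names
        self.reachable = set(stores)      # sources currently in radio range

    def unicast(self, addr, name):
        if addr in self.reachable and name in self.stores.get(addr, ()):
            return f"data({name})"
        return None                       # path broken or content missing

    def broadcast(self, name):
        for addr in self.reachable:       # any neighbor holding the content answers
            if name in self.stores[addr]:
                return addr, f"data({name})"
        return None, None

class Requester:
    def __init__(self, network):
        self.network = network
        self.fib = {}                     # prefix -> address of a known content source

    def request(self, name):
        prefix = name.rsplit("/", 1)[0]
        source = self.fib.get(prefix)
        if source is not None:            # try the previously discovered source first
            data = self.network.unicast(source, name)
            if data is not None:
                return data
            del self.fib[prefix]          # unicast path broke: forget it, rediscover
        source, data = self.network.broadcast(name)   # broadcast content discovery
        if source is not None:
            self.fib[prefix] = source     # remember the source for later unicast
        return data

net = Network({"nodeA": {"/video/seg0", "/video/seg1"}})
requester = Requester(net)
print(requester.request("/video/seg0"))   # discovered via broadcast, FIB entry cached
print(requester.request("/video/seg1"))   # served over unicast to nodeA
```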

Relevance: 30.00%

Abstract:

The reduction in sea ice along the SE Greenland coast during the last century has severely impacted ice-rafting to this area. In order to reconstruct ice-rafting and oceanographic conditions in the area of Denmark Strait during the last ~150 years, we conducted a multiproxy study on three short (20 cm) sediment cores from outer Kangerdlugssuaq Trough (~300 m water depth). The proxy-based data obtained have been compared with historical and instrumental data to gain a better understanding of the ice sheet-ocean interactions in the area. A robust chronology has been developed based on 210Pb and 137Cs measurements on core PO175GKC#9 (~66.2°N, 32°W) and expanded to the two adjacent cores based on correlations between calcite weight percent records. Our proxy records include sea-ice and phytoplankton biomarkers, and a variety of mineralogical determinations based on the <2 mm sediment fraction, including identification with quantitative X-ray diffraction, ice-rafted debris counts on the 63-150 µm sand fraction, and source identifications based on the composition of Fe oxides in the 45-250 µm fraction. A multivariate statistical analysis indicated significant correlations between our proxy records and historical data, especially with the mean annual temperature data from Stykkishólmur (Iceland) and the storis index (historical observations of sea-ice export via the East Greenland Current). In particular, the biological proxies (calcite weight percent, IP25, and total organic carbon %) showed significant linkage with the storis index. Our records show two distinct intervals in the recent history of the SE Greenland coast. The first of these (AD 1850-1910) shows predominantly perennial sea-ice conditions in the area, while the second (AD 1910-1990) shows more seasonally open-water conditions.
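As a hedged illustration of how a 210Pb-based chronology like the one above can be derived, the sketch below applies the simple constant-initial-concentration dating model to hypothetical excess 210Pb activities; it is not the core's actual data or the authors' age model.

```python
import numpy as np

# Hypothetical excess (unsupported) 210Pb activities down-core, in Bq/kg.
depth_cm  = np.array([0.5, 2.5, 4.5, 6.5, 8.5, 10.5])
excess_pb = np.array([120., 85., 60., 42., 30., 21.])

lam = np.log(2) / 22.3               # 210Pb decay constant (half-life ~22.3 yr)

# Constant-initial-concentration (CIC) model: age(z) = ln(A0 / A(z)) / lambda.
age_yr = np.log(excess_pb[0] / excess_pb) / lam

core_collected_ad = 1996             # hypothetical collection year
for z, t in zip(depth_cm, age_yr):
    print(f"{z:5.1f} cm  ~  AD {core_collected_ad - t:6.1f}")
```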

Relevance: 30.00%

Abstract:

We present and examine a multi-sensor global compilation of mid-Holocene (MH) sea surface temperatures (SST), based on Mg/Ca and alkenone palaeothermometry and reconstructions obtained using planktonic foraminifera and organic-walled dinoflagellate cyst census counts. We assess the uncertainties originating from the use of different methodologies and evaluate the potential of MH SST reconstructions as a benchmark for climate-model simulations. The comparison between different analytical approaches (time frame, baseline climate) shows that the choice of time window for the MH has a negligible effect on the reconstructed SST pattern, whereas the choice of baseline climate affects both the magnitude and the spatial pattern of the reconstructed SSTs. Comparison of the SST reconstructions made using different sensors shows significant discrepancies at a regional scale, with uncertainties often exceeding the reconstructed SST anomaly. Apparent patterns in SST may largely be a reflection of the use of different sensors in different regions. Overall, the uncertainties associated with the SST reconstructions are generally larger than the MH anomalies. Thus, the SST data currently available cannot serve as a target for benchmarking model simulations.
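A minimal sketch of the kind of check described above, assuming hypothetical per-site reconstructed MH SSTs, a chosen baseline climate and combined 1-sigma uncertainties: an anomaly is only a usable model benchmark where it exceeds its uncertainty.

```python
import numpy as np

# Hypothetical mid-Holocene SST reconstructions (deg C) at a few sites,
# with a baseline climate and combined 1-sigma uncertainties.
mh_sst       = np.array([18.2, 24.6, 10.1, 27.3])
baseline_sst = np.array([17.9, 24.1, 10.8, 27.2])
uncertainty  = np.array([ 1.0,  0.8,  0.5,  1.2])

anomaly = mh_sst - baseline_sst
usable  = np.abs(anomaly) > uncertainty      # anomaly resolvable above the noise?

for dt, sig, ok in zip(anomaly, uncertainty, usable):
    print(f"anomaly {dt:+.1f} +/- {sig:.1f} deg C -> "
          f"{'usable benchmark' if ok else 'not resolvable'}")
```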

Relevance: 30.00%

Abstract:

Clay mineral and bulk chemical (Si, Al, K, Mg, Sr, La, Ce, Nd) analyses of terrigenous surface sediments on the Siberian-Arctic shelf indicate that there are five regions with distinct, or endmember, sedimentary compositions. The formation of these geochemical endmembers is controlled by sediment provenance and grain size sorting. (1) The shale endmember (Al-, K- and REE-rich sediment) is eroded from fine-grained marine sedimentary rocks of the Verkhoyansk Mountains and Kolyma-Omolon superterrain, and discharged to the shelf by the Lena, Yana, Indigirka and Kolyma Rivers. (2) The basalt endmember (Mg-rich) originates from NE Siberia's Okhotsk-Chukotsk volcanic belt and Bering Strait inflow, and is prevalent in Chukchi Sea sediments. Concentrations of the volcanically derived clay mineral smectite are elevated in Chukchi fine-fraction sediments, corroborating the conclusion that Chukchi sediments are volcanic in origin. (3) The mature sandstone endmember (Si-rich) is found proximal to Wrangel Island and sections of the Chukchi Sea's Siberian coast and is derived from the sedimentary Chukotka terrain that comprises these landmasses. (4) The immature sandstone endmember (Sr-rich) is abundant in the New Siberian Island region and reflects inputs from the sedimentary rocks that comprise the islands. (5) The immature sandstone endmember is also prevalent in the western Laptev Sea, where it is eroded from sedimentary deposits blanketing the Siberian platform that are compositionally similar to those on the New Siberian Islands. Western Laptev sediments can be distinguished from those of the New Siberian Island region by their comparatively elevated smectite concentrations and the presence of the basalt endmember, which indicate that Siberian platform flood basalts are also a source of western Laptev sediments. In certain locations, grain size sorting noticeably affects shelf sediment chemistry. (1) Erosion of fines by currents and sediment ice rafting contributes to the formation of the coarse-grained sandstone endmembers. (2) Bathymetrically controlled grain size sorting, in which fines preferentially accumulate offshore in deeper, less energetic water, helps distribute the fine-grained shale and basalt endmembers. An important implication of these results is that the observed sedimentary geochemical endmembers provide new markers of sediment provenance, which can be used to track sediment transport, ice-rafted debris dispersal or the movement of particle-reactive contaminants.
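The endmember compositions identified above serve as provenance markers. As a hypothetical illustration of how they could be used (not the authors' method, and the numbers are invented), a sample's bulk chemistry can be unmixed into endmember fractions with non-negative least squares:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical normalized concentrations (rows: Si, Al, K, Mg) for three
# endmembers: shale, basalt, mature sandstone. Values are illustrative only.
endmembers = np.array([
    [0.55, 0.48, 0.85],   # Si
    [0.25, 0.20, 0.05],   # Al
    [0.12, 0.05, 0.04],   # K
    [0.08, 0.27, 0.06],   # Mg
])

sample = np.array([0.58, 0.21, 0.10, 0.11])   # a hypothetical shelf sediment

fractions, residual = nnls(endmembers, sample)
fractions /= fractions.sum()                  # normalize to mixing proportions
print("shale/basalt/sandstone fractions:", np.round(fractions, 2),
      "residual:", round(residual, 3))
```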

Relevance: 30.00%

Abstract:

A database containing the global and diffuse components of the surface solar hourly irradiation measured from 1 January 2004 to 31 December 2010 at eight stations of the Egyptian Meteorological Authority is presented. For three of these sites (Cairo, Aswan, and El-Farafra), the direct component is also available. In addition, a series of meteorological variables including surface pressure, relative humidity, temperature, wind speed and direction is provided at the same hourly resolution at all stations. The details of the experimental sites and the instruments used for the acquisition are given. Special attention is paid to the quality of the data, and the procedure applied to flag suspicious or erroneous measurements is described in detail. Between 88 and 99% of the daytime measurements are validated by this quality control. Except at Barrani, where the number is lower (13,500), between 20,000 and 29,000 measurements of global and diffuse hourly irradiation are available at all sites for the 7-year period. Similarly, from 9,000 to 13,000 measurements of direct hourly irradiation are provided for the three sites where this component is measured. With its high temporal resolution, this consistent irradiation and meteorological database constitutes a reliable source for estimating the potential of solar energy in Egypt. It is also suited to the study of high-frequency atmospheric processes such as the impact of aerosols on atmospheric radiative transfer. In the near future, it is planned to extend the present 2004-2010 database on a regular basis.
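As a hedged sketch of the kind of automatic quality control described above (the limits and column names are illustrative assumptions, not the procedure actually applied to the Egyptian stations), hourly records can be flagged when physically impossible relations hold, such as negative irradiation, diffuse exceeding global, or global exceeding the top-of-atmosphere value:

```python
import pandas as pd

# Hypothetical hourly irradiation records (Wh/m^2) with a top-of-atmosphere bound.
df = pd.DataFrame({
    "global_Whm2":  [450, 620, -5, 300, 980],
    "diffuse_Whm2": [120, 700, 40, 110, 200],
    "toa_Whm2":     [900, 880, 860, 840, 820],  # top-of-atmosphere reference
})

flags = pd.DataFrame({
    "negative":        (df["global_Whm2"] < 0) | (df["diffuse_Whm2"] < 0),
    "diffuse_gt_glob": df["diffuse_Whm2"] > df["global_Whm2"],
    "above_toa":       df["global_Whm2"] > df["toa_Whm2"],
})
df["suspicious"] = flags.any(axis=1)   # flagged records are excluded or reviewed
print(df)
```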

Relevance: 30.00%

Abstract:

The increasing importance of noise pollution has led to the creation of many new noise testing laboratories in recent years. For this reason, and due to the legal implications that noise reporting may have, it is necessary to create procedures intended to guarantee the quality of the testing and its results. For instance, the ISO/IEC standard 17025:2005 specifies general requirements for the competence of testing laboratories. In this standard, interlaboratory comparisons are one of the main measures that must be applied to guarantee the quality of laboratories when applying specific testing methodologies. In the specific case of environmental noise, round robin tests are usually difficult to design, as it is hard to find scenarios that remain available and controlled while the participants carry out their measurements. Monitoring and controlling the factors that can influence the measurements (source emissions, propagation, background noise…) is not usually affordable, so the most widespread solution is to create very simple scenarios, in which most of the factors that can influence the results are excluded (sampling, processing of results, background noise, source detection…). The new approach described in this paper only requires the organizer to make actual measurements (or prepare virtual ones). Applying and interpreting a common reference document (standard, regulation…), the participants must analyze these input data independently to produce their results, which are then compared across participants. The measurement costs are severely reduced for the participants, there is no need to monitor the scenario conditions, and almost any relevant factor can be included in this methodology.
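One common way to compare the participants' independently computed results is a z-score scheme in the spirit of ISO 13528, sketched below under the assumption that the organizer fixes a reference value and a standard deviation for proficiency assessment; this is not necessarily the scoring used by the authors.

```python
# Hypothetical day-evening-night levels (dB) reported by each participant
# after analyzing the same shared set of measurements.
reported = {"lab_A": 62.1, "lab_B": 61.4, "lab_C": 64.9, "lab_D": 62.6}

x_ref   = 62.0   # assigned reference value (e.g. a robust mean of participants)
sigma_p = 1.0    # standard deviation for proficiency assessment

for lab, value in reported.items():
    z = (value - x_ref) / sigma_p
    verdict = "ok" if abs(z) <= 2 else ("warning" if abs(z) < 3 else "action")
    print(f"{lab}: z = {z:+.1f} ({verdict})")
```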

Relevance: 30.00%

Abstract:

The magnetoencephalogram (MEG) is contaminated with undesired signals, which are called artifacts. Some of the most important ones are the cardiac and ocular artifacts (CA and OA, respectively), and the power line noise (PLN). Blind source separation (BSS) has been used to reduce the influence of these artifacts in the data. There is a plethora of BSS-based artifact removal approaches, but few comparative analyses. In this study, MEG background activity from 26 subjects was processed with five widespread BSS (AMUSE, SOBI, JADE, extended Infomax, and FastICA) and one constrained BSS (cBSS) techniques. Then, the ability of several combinations of BSS algorithm, epoch length, and artifact detection metric to automatically reduce the CA, OA, and PLN was quantified with objective criteria. The results pinpointed cBSS as a very suitable approach to remove the CA. Additionally, a combination of AMUSE or SOBI and artifact detection metrics based on entropy or power criteria decreased the OA. Finally, the PLN was reduced by means of a spectral metric. These findings confirm the utility of BSS in artifact removal for MEG background activity.
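As a simplified, hedged illustration of BSS-based artifact reduction (using scikit-learn's FastICA and a kurtosis threshold as a stand-in for the entropy/power detection metrics evaluated in the study; it is not the authors' pipeline and uses synthetic data):

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

# Hypothetical MEG-like data: mixtures of background activity and a spiky
# "cardiac" source across 8 channels.
rng = np.random.default_rng(1)
n_channels, n_samples = 8, 5000
background = rng.normal(0, 1, (3, n_samples))
cardiac = np.zeros(n_samples); cardiac[::700] = 25.0        # sparse, high-kurtosis spikes
sources = np.vstack([background, cardiac])
mixing = rng.normal(0, 1, (n_channels, sources.shape[0]))
X = (mixing @ sources).T                                    # samples x channels

ica = FastICA(n_components=4, random_state=0)
S = ica.fit_transform(X)                                    # estimated sources

# Flag high-kurtosis components as artifacts and zero them before reconstruction.
artifact = kurtosis(S, axis=0) > 5
S_clean = S.copy(); S_clean[:, artifact] = 0.0
X_clean = ica.inverse_transform(S_clean)                    # "cleaned" channel data
print("components flagged as artifacts:", np.where(artifact)[0])
```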

Relevance: 30.00%

Abstract:

CIAO is an advanced programming environment supporting Logic and Constraint programming. It offers a simple concurrent kernel on top of which declarative and non-declarative extensions are added via libraries. Libraries are available for supporting the ISO Prolog standard, several constraint domains, functional and higher order programming, concurrent and distributed programming, internet programming, and others. The source language allows declaring properties of predicates via assertions, including types and modes. Such properties are checked at compile-time or at run-time. The compiler and system architecture are designed to natively support modular global analysis, with the two objectives of proving the properties stated in assertions and performing program optimizations, including transparently exploiting parallelism in programs. The purpose of this paper is to report on recent progress made in the context of the CIAO system, with special emphasis on the capabilities of the compiler, the techniques used for supporting such capabilities, and the results already obtained with the system in the areas of program analysis and transformation.

Relevance: 30.00%

Abstract:

Applications of remote sensing to monitoring the land surface have multiplied and been refined with the launch of new sensors by the different space agencies. The need for spatially homogeneous data that is updated frequently has led to the development of new programs such as the Earth Observing System (EOS) of the National Aeronautics and Space Administration (NASA). One of the sensors on board the flagship of that program, the TERRA satellite, is the Multi-angle Imaging SpectroRadiometer (MISR), designed to capture multi-angle information of the Earth's surface. Since the 1970s it has been known that the reflectance of the various land covers and land uses varies with the viewing and illumination angles, i.e. that it is anisotropic. This variation is related to the three-dimensional structure of those covers, so the relationship can be exploited to obtain information about that structure beyond what purely spectral information can provide. The MISR sensor incorporates nine cameras at different angles that capture nine almost simultaneous images of the same point, allowing a relatively reliable estimate of the anisotropic response of the Earth's surface. Several studies have shown that variables related to vegetation structure can be estimated from the information provided by MISR. In this thesis, a first application to the Iberian Peninsula has been carried out to assess its usefulness for estimating variables of forest interest.
As a first step, the temporal variability of the data was analyzed; it is caused by changes in the acquisition geometry, i.e. the relative position of the sensors and the illumination source, which in this case is the Sun. Anisotropy was found to be greater from late autumn to early spring, because the Sun's position is then closer to the plane of the cameras. It was also found that the maximum and minimum values shift over time between the central camera and the angular extremes. In the multi-angle characterization of CORINE Land Cover classes, the predominant shape of the angular signature in images acquired with the highest Sun is convex, with a maximum at the camera closest to the illumination source; when the Sun is much lower, the maximum moves to the outermost camera. Moreover, the data obtained in summer are much more variable for each land cover than those from November, possibly due to the proportional increase of shadowed areas. To check whether multi-angle information has any effect on land cover and land use classification, a series of classifications was performed varying the information used, from multispectral only to multi-angle plus multispectral. The results show that, while multi-angle information yields the worst results for the most generic classifications, as the number of classes increases it improves on what is obtained with multispectral information alone.
In addition, quantitative variables such as the canopy cover fraction (Fcc) and vegetation height were estimated from MISR data at different resolutions. In the Alcudia valley (Ciudad Real), tree canopy cover was estimated for 275 m pixels using neural networks. The results show that using multispectral and multi-angle information together can improve the estimates by almost 20% with respect to multispectral data alone. The relationships obtained reach R values of 0.7 with errors below 10% in Fcc, clearly better than those obtained for the same variable with the product derived from multispectral data of the Moderate Resolution Imaging Spectroradiometer (MODIS), also on board Terra. Finally, canopy cover and effective vegetation height were estimated for 700,000 ha of the province of Murcia at a resolution of 1,100 m. The results show the relationship between the spectral and the multi-angle data, with Spearman coefficients of about 0.8 for canopy cover and 0.4 for effective height. Estimates of both variables with neural networks and several combinations of data yield R values above 0.85 for canopy cover and 0.6 for effective height. The multi-angle parameters provided in the MISR products at 1,100 m do not give good results by themselves, but they produce some improvement when added to the spectral information. The root mean square errors obtained are below 0.016 for canopy cover (expressed as a fraction) and 0.7 m for effective height. Geographically weighted regressions also show that even better results can be obtained locally, especially where the spatial variability of the estimated variables is higher.
In summary, the use of the data provided by MISR offers a promising way of improving results at medium-low resolution, both for image classification and for the estimation of quantitative variables of the vegetation structure.
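As a hedged sketch of the neural-network regression used above to estimate canopy cover from combined multispectral and multi-angle predictors (the feature layout and data are hypothetical, and scikit-learn's MLPRegressor stands in for whatever network configuration the thesis actually used):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training set: 4 spectral bands + 9 angular red-band reflectances
# per pixel, and a field-derived canopy cover fraction (Fcc, 0-1) as the target.
rng = np.random.default_rng(0)
n = 500
X = rng.uniform(0.0, 0.6, size=(n, 13))
fcc = np.clip(0.9 * X[:, 3] - 0.4 * X[:, 2] + 0.2 * X[:, 9]
              + rng.normal(0, 0.03, n), 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, fcc, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                   random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out pixels:", round(model.score(X_te, y_te), 2))
```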

Relevance: 30.00%

Abstract:

In recent decades, there has been an increasing interest in systems comprised of several autonomous mobile robots and, as a result, a substantial amount of development in the field of Artificial Intelligence, especially in Robotics. Several studies in the literature focus on the creation of intelligent machines and devices capable of imitating the functions and movements of living beings. Multi-Robot Systems (MRS) can often deal with tasks that are difficult, if not impossible, to be accomplished by a single robot. In the context of MRS, one of the main challenges is the need to control, coordinate and synchronize the operation of multiple robots to perform a specific task. This requires the development of new strategies and methods which allow us to obtain the desired system behavior in a formal and concise way. This PhD thesis studies the coordination of multi-robot systems and, in particular, addresses the problem of the distribution of heterogeneous multi-tasks. The main interest in these systems is to understand how, from simple rules inspired by the division of labor in social insects, a group of robots can perform tasks in an organized and coordinated way. We are mainly interested in truly distributed or decentralized solutions in which the robots themselves, autonomously and in an individual manner, select a particular task so that all tasks are optimally distributed. In general, to perform the multi-task distribution among a team of robots, they have to synchronize their actions and exchange information. Under this approach we can speak of multi-task selection instead of multi-task assignment, which means that the agents or robots select the tasks instead of being assigned a task by a central controller. The key element in these algorithms is the estimation of the stimuli and the adaptive update of the thresholds. This means that each robot performs this estimate locally, depending on the load or the number of pending tasks to be performed. In addition, the evaluation of the results of each approach is of particular interest, comparing the results obtained when noise is introduced into the number of pending loads in order to simulate the robots' error in estimating the real number of pending tasks. The main contribution of this thesis lies in the approach based on self-organization and division of labor in social insects. An experimental scenario for the coordination problem among multiple robots, the robustness of the approaches and the generation of dynamic tasks are presented and discussed. The particular issues studied are:
Threshold models: experiments conducted to test the response threshold model (see the sketch after this list), analyzing the system performance index for the problem of the distribution of heterogeneous multi-tasks in multi-robot systems; additive noise was introduced into the number of pending loads and dynamic tasks were generated over time.
Learning automata methods: experiments to test the learning automata-based probabilistic algorithms. The approach was evaluated in terms of the system performance index, with additive noise and with dynamic task generation, for the same problem of the distribution of heterogeneous multi-tasks in multi-robot systems.
Ant colony optimization: experiments to test the ant colony optimization-based deterministic algorithms for the distribution of heterogeneous multi-tasks in multi-robot systems. In these experiments, the system performance index is evaluated by introducing additive noise and dynamic task generation over time.
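The response threshold model referred to above is commonly formalized as follows: a robot engages in a task with probability s² / (s² + θ²), where s is the task stimulus (for example, the estimated pending load) and θ is the robot's threshold for that task type. The sketch below uses hypothetical parameters and is not the thesis's exact implementation.

```python
import random

def engage_probability(stimulus, threshold):
    """Classic response threshold rule: P = s^2 / (s^2 + theta^2)."""
    return stimulus**2 / (stimulus**2 + threshold**2)

# Hypothetical heterogeneous team: each robot has its own threshold per task type.
thresholds = {"robot_1": {"transport": 5.0, "clean": 20.0},
              "robot_2": {"transport": 20.0, "clean": 5.0}}
pending = {"transport": 12, "clean": 3}       # (noisy) estimate of pending loads

random.seed(0)
for robot, th in thresholds.items():
    for task, load in pending.items():
        p = engage_probability(load, th[task])
        if random.random() < p:               # stochastic, decentralized selection
            print(f"{robot} selects task '{task}' (p = {p:.2f})")
```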

Relevance: 30.00%

Abstract:

Performing three-dimensional pin-by-pin full core calculations based on an improved solution of the multi-group diffusion equation is an affordable option nowadays to compute accurate local safety parameters for light water reactors. Since a transport approximation is solved, appropriate correction factors, such as interface discontinuity factors, are required to nearly reproduce the fully heterogeneous transport solution. Calculating exact pin-by-pin discontinuity factors requires the knowledge of the heterogeneous neutron flux distribution, which depends on the boundary conditions of the pin-cell as well as the local variables along the nuclear reactor operation. As a consequence, it is impractical to compute them for each possible configuration; however, inaccurate correction factors are one major source of error in core analysis when using multi-group diffusion theory. An alternative to generate accurate pin-by-pin interface discontinuity factors is to build a functional-fitting that allows incorporating the environment dependence in the computed values. This paper suggests a methodology to consider the neighborhood effect based on the Analytic Coarse-Mesh Finite Difference method for the multi-group diffusion equation. It has been applied to both definitions of interface discontinuity factors, the one based on the Generalized Equivalence Theory and the one based on Black-Box Homogenization, and for different few energy groups structures. Conclusions are drawn over the optimal functional-fitting and demonstrative results are obtained with the multi-group pin-by-pin diffusion code COBAYA3 for representative PWR configurations.
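As a brief, hedged illustration of what an interface discontinuity factor is under the Generalized Equivalence Theory definition (the numbers are hypothetical; this is not the COBAYA3 procedure): the factor on each pin-cell surface is the ratio of the heterogeneous surface flux to the homogeneous diffusion surface flux, so that the homogeneous solution multiplied by its factor is continuous across the interface.

```python
# Hypothetical one-group surface fluxes on the two sides of a pin-cell interface.
phi_het_left,  phi_hom_left  = 1.08, 1.00   # heterogeneous vs homogeneous, left cell
phi_het_right, phi_hom_right = 1.08, 1.15   # the heterogeneous flux is continuous

# Generalized Equivalence Theory: f = phi_het(surface) / phi_hom(surface).
f_left  = phi_het_left  / phi_hom_left
f_right = phi_het_right / phi_hom_right

# Discontinuity condition imposed on the diffusion solution:
#   f_left * phi_hom_left == f_right * phi_hom_right
print(f"f_left = {f_left:.3f}, f_right = {f_right:.3f}")
print("interface condition satisfied:",
      abs(f_left * phi_hom_left - f_right * phi_hom_right) < 1e-12)
```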

Relevance: 30.00%

Abstract:

Probabilistic modeling is the defining characteristic of estimation of distribution algorithms (EDAs), and it determines their behavior and performance in optimization. Regularization is a well-known statistical technique used for obtaining an improved model by reducing the generalization error of estimation, especially in high-dimensional problems. ℓ1-regularization is a type of this technique with the appealing variable selection property, which results in sparse model estimations. In this thesis, we study the use of regularization techniques for model learning in EDAs. Several methods for regularized model estimation in continuous domains based on a Gaussian distribution assumption are presented, and analyzed from different aspects when used for optimization in a high-dimensional setting, where the population size of the EDA has a logarithmic scale with respect to the number of variables. The optimization results obtained for a number of continuous problems with an increasing number of variables show that the proposed EDA based on regularized model estimation performs a more robust optimization, and is able to achieve significantly better results for larger dimensions than other Gaussian-based EDAs. We also propose a method for learning a marginally factorized Gaussian Markov random field model using regularization techniques and a clustering algorithm. The experimental results show notable optimization performance on continuous additively decomposable problems when using this model estimation method. Our study also covers multi-objective optimization, and we propose joint probabilistic modeling of variables and objectives in EDAs based on Bayesian networks, specifically models inspired by multi-dimensional Bayesian network classifiers. It is shown that with this approach to modeling, two new types of relationships are encoded in the estimated models in addition to the variable relationships captured in other EDAs: objective-variable and objective-objective relationships. An extensive experimental study shows the effectiveness of this approach for multi- and many-objective optimization. With the proposed joint variable-objective modeling, in addition to the Pareto set approximation, the algorithm is also able to obtain an estimation of the multi-objective problem structure. Finally, the study of multi-objective optimization based on joint probabilistic modeling is extended to noisy domains, where the noise in objective values is represented by intervals. A new version of the Pareto dominance relation for ordering the solutions in these problems, namely α-degree Pareto dominance, is introduced and its properties are analyzed. We show that the ranking methods based on this dominance relation can result in competitive performance of EDAs with respect to the quality of the approximated Pareto sets. This dominance relation is then used together with a method for joint probabilistic modeling based on ℓ1-regularization for multi-objective feature subset selection in classification, where six different measures of accuracy are considered as objectives with interval values. The individual assessment of the proposed joint probabilistic modeling and solution ranking methods on datasets with small-medium dimensionality, when using two different Bayesian classifiers, shows that comparable or better Pareto sets of feature subsets are approximated in comparison to standard methods.
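A minimal sketch of the core idea of regularized model learning in a Gaussian EDA, assuming scikit-learn's GraphicalLasso as the ℓ1-regularized estimator of the (sparse) precision matrix; it is a toy optimizer on a separable quadratic, not the thesis's algorithms:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def sphere(x):                        # toy objective to minimize
    return np.sum(x**2, axis=1)

rng = np.random.default_rng(0)
dim, pop, elite = 20, 100, 30         # selected set is small relative to dim

population = rng.normal(0, 5, (pop, dim))
for generation in range(20):
    fitness = sphere(population)
    parents = population[np.argsort(fitness)[:elite]]        # truncation selection
    model = GraphicalLasso(alpha=0.1).fit(parents)            # l1-regularized Gaussian
    population = rng.multivariate_normal(model.location_,     # sample new population
                                         model.covariance_, size=pop)
print("best fitness after 20 generations:", sphere(population).min())
```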

Relevance: 30.00%

Abstract:

We present ARGoS, a novel open source multi-robot simulator. The main design focus of ARGoS is the real-time simulation of large heterogeneous swarms of robots. Existing robot simulators obtain scalability by imposing limitations on their extensibility and on the accuracy of the robot models. By contrast, in ARGoS we pursue a deeply modular approach that allows the user both to easily add custom features and to allocate computational resources where needed by the experiment. A unique feature of ARGoS is the possibility to use multiple physics engines of different types and to assign them to different parts of the environment. Robots can migrate from one engine to another transparently. This feature enables entirely novel classes of optimizations to improve scalability and paves the way for a new approach to parallelism in robotics simulation. Results show that ARGoS can simulate about 10,000 simple wheeled robots 40% faster than real-time.
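A hypothetical toy sketch of the multiple-physics-engines idea (purely conceptual; ARGoS itself is a C++ simulator configured via XML, and this is not its API): space is partitioned among engines of different fidelity, and each robot is updated by whichever engine owns its current region, migrating transparently when it crosses the boundary.

```python
# Conceptual illustration only: two "physics engines" of different fidelity
# own different halves of a 1-D arena; robots migrate between them freely.
class PointEngine:                      # cheap kinematics: position += velocity
    def step(self, robot, dt):
        robot["x"] += robot["v"] * dt

class DragEngine:                       # slightly richer model with friction
    def step(self, robot, dt):
        robot["v"] *= 0.95
        robot["x"] += robot["v"] * dt

engines = {"left": PointEngine(), "right": DragEngine()}

def owner(robot):                       # spatial partition of the arena
    return "left" if robot["x"] < 50.0 else "right"

robots = [{"x": 10.0 * i, "v": 1.0} for i in range(10)]
for _ in range(100):                    # each robot is stepped by its current owner
    for robot in robots:
        engines[owner(robot)].step(robot, dt=1.0)
print([round(r["x"], 1) for r in robots])
```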