841 results for Robotic benchmarks


Relevance:

10.00%

Publisher:

Abstract:

Robotic mapping is the process of automatically constructing an environment representation using mobile robots. We address the problem of semantic mapping, which consists of using mobile robots to create maps that represent not only metric occupancy but also other properties of the environment. Specifically, we develop techniques to build maps that represent activity and navigability of the environment. Our approach to semantic mapping is to combine machine learning techniques with standard mapping algorithms. Supervised learning methods are used to automatically associate properties of space to the desired classification patterns. We present two methods, the first based on hidden Markov models and the second on support vector machines. Both approaches have been tested and experimentally validated in two problem domains: terrain mapping and activity-based mapping.
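
A minimal sketch of the second supervised method mentioned above (an SVM labelling grid cells by a terrain property); the features and training data are hypothetical placeholders, not the paper's:

```python
# Sketch: label occupancy-grid cells with a terrain property using an SVM.
# The two features per cell (mean height, roughness) are hypothetical.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))                 # [mean_height, roughness]
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] < 0).astype(int)  # 1 = navigable

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

# Classify every cell of a 50x50 feature grid into a semantic map layer.
grid_features = rng.normal(size=(50 * 50, 2))
semantic_map = clf.predict(grid_features).reshape(50, 50)
print(semantic_map.mean())                          # fraction labelled navigable
```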

Relevance:

10.00%

Publisher:

Abstract:

Navigation is a broad topic that has been receiving considerable attention from the mobile robotics community over the years. In order to execute autonomous driving in outdoor urban environments, it is necessary to identify parts of the terrain that can be traversed and parts that should be avoided. This paper describes an analysis of terrain identification based on different visual cues, using an MLP artificial neural network and combining the responses of several classifiers. Experimental tests using a vehicle and a video camera were conducted in real scenarios to evaluate the proposed approach.
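
A minimal sketch of the classifier-combination idea described above: one MLP per visual cue, with a majority vote over their responses. The feature sets and labels are simulated placeholders, not the paper's pipeline:

```python
# Sketch: combine several MLP classifiers trained on different visual
# features (e.g., color, texture) by majority vote.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 300
feature_sets = [rng.normal(size=(n, 8)) for _ in range(3)]  # one per cue
labels = rng.integers(0, 2, size=n)      # 1 = traversable, 0 = obstacle

classifiers = [
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, labels)
    for X in feature_sets
]

def classify(patches):
    """Majority vote over the per-cue classifiers."""
    votes = np.stack([c.predict(p) for c, p in zip(classifiers, patches)])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```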

Relevance:

10.00%

Publisher:

Abstract:

We present a new climatology of atmospheric aerosols (primarily pyrogenic and biogenic) for the Brazilian tropics on the basis of a high-quality data set of spectral aerosol optical depth and directional sky radiance measurements from Aerosol Robotic Network (AERONET) Cimel Sun-sky radiometers at more than 15 sites distributed across the Amazon basin and adjacent Cerrado region. This network is the only long-term project (with a record including observations from more than 11 years at some locations) ever to have provided ground-based remotely-sensed column aerosol properties for this critical region. Distinctive features of the Amazonian area aerosol are presented by partitioning the region into three aerosol regimes: southern Amazonian forest, Cerrado, and northern Amazonian forest. The monitoring sites generally include measurements from the interval 1999-2006, but some sites have measurement records that date back to the initial days of the AERONET program in 1993. Seasonal time series of aerosol optical depth (AOD), Ångström exponent, and columnar-averaged microphysical properties of the aerosol derived from sky radiance inversion techniques (single-scattering albedo, volume size distribution, fine mode fraction of AOD, etc.) are described and contrasted for the defined regions. During the wet season, occurrences of mineral dust penetrating deep into the interior were observed.
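
For reference, the Ångström exponent reported in these time series is derived from AOD at two wavelengths; a small worked example with illustrative values:

```python
# Sketch: Ångström exponent from AOD at two wavelengths (values illustrative).
import math

def angstrom_exponent(tau1, lam1, tau2, lam2):
    """alpha = -ln(tau1/tau2) / ln(lam1/lam2); higher alpha -> finer particles."""
    return -math.log(tau1 / tau2) / math.log(lam1 / lam2)

# AOD 0.68 at 440 nm and 0.20 at 870 nm, roughly typical of fine smoke aerosol:
print(f"{angstrom_exponent(0.68, 440.0, 0.20, 870.0):.2f}")  # ~1.80
```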

Relevance:

10.00%

Publisher:

Abstract:

Purple acid phosphatases (PAPs) are a group of metallohydrolases that contain a dinuclear Fe(III)M(II) center (M(II) = Fe, Mn, Zn) in the active site and are able to catalyze the hydrolysis of a variety of phosphoric acid esters. The dinuclear complex [(H(2)O)Fe(III)(mu-OH)Zn(II)(L-H)](ClO(4))(2) (2) with the ligand 2-[N-bis(2-pyridylmethyl)aminomethyl]-4-methyl-6-[N-(2-pyridylmethyl)(2-hydroxybenzyl)aminomethyl]phenol (H(2)L-H) has recently been prepared and is found to closely mimic the coordination environment of the Fe(III)Zn(II) active site found in red kidney bean PAP (Neves et al. J. Am. Chem. Soc. 2007, 129, 7486). The biomimetic shows significant catalytic activity in hydrolytic reactions. Using a variety of structural, spectroscopic, and computational techniques, the electronic structure of the Fe(III) center of this biomimetic complex was determined. In the solid state the electronic ground state reflects the rhombically distorted Fe(III)N(2)O(4) octahedron with a dominant tetragonal compression aligned along the mu-OH-Fe-O(phenolate) direction. To probe the role of the Fe-O(phenolate) bond, the phenolate moiety was modified to contain electron-donating or -withdrawing groups (-CH(3), -H, -Br, -NO(2)) in the 5-position. The effects of the substituents on the electronic properties of the biomimetic complexes were studied with a range of experimental and computational techniques. This study establishes benchmarks against accurate crystallographic structural information using spectroscopic techniques that are not restricted to single crystals. Kinetic studies on the hydrolysis reaction revealed that the phosphodiesterase activity increases in the order -NO(2) < -Br < -H < -CH(3) when bis(2,4-dinitrophenyl)phosphate (2,4-bdnpp) was used as substrate, and a linear free-energy relationship is found when log(k(cat)/k(0)) is plotted against the Hammett parameter σ. However, nuclease activity measurements on the cleavage of double-stranded DNA showed that the complexes containing the electron-withdrawing -NO(2) and electron-donating -CH(3) groups are the most active, while the cytotoxic activity of the biomimetics on leukemia and lung tumor cells is highest for complexes with electron-donating groups.
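
The linear free-energy relationship mentioned above takes the form log(k(cat)/k(0)) = ρσ. A minimal sketch of fitting the reaction constant ρ; the σ values are standard Hammett para constants, but the rate ratios below are placeholders, not the paper's measurements:

```python
# Sketch: fit the Hammett reaction constant rho from log(kcat/k0) vs sigma.
import numpy as np

# Standard Hammett sigma(para) constants for the substituents studied.
sigma = np.array([-0.17, 0.00, 0.23, 0.78])      # -CH3, -H, -Br, -NO2
# Placeholder rate ratios (NOT the paper's data), decreasing toward -NO2.
log_k_ratio = np.array([0.12, 0.00, -0.15, -0.52])

rho, intercept = np.polyfit(sigma, log_k_ratio, 1)
print(f"rho = {rho:.2f}")  # negative rho: electron donors accelerate hydrolysis
```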

Relevance:

10.00%

Publisher:

Abstract:

The assessment of routing protocols for mobile wireless networks is a difficult task, because of the networks' dynamic behavior and the absence of benchmarks. However, some of these networks, such as intermittent wireless sensor networks, periodic or cyclic networks, and some delay-tolerant networks (DTNs), have more predictable dynamics, as the temporal variations in the network topology can be considered deterministic, which may make them easier to study. Recently, a graph-theoretic model, the evolving graph, was proposed to help capture the dynamic behavior of such networks, in view of the construction of least-cost routing and other algorithms. The algorithms and insights obtained through this model are theoretically efficient and intriguing. However, there has been no study of how such theoretical results carry over to practical situations. The objective of our work is therefore to analyze the applicability of evolving graph theory to the construction of efficient routing protocols in realistic scenarios. In this paper, we use the NS2 network simulator to first implement an evolving graph based routing protocol, and then use it as a benchmark when comparing the four major ad hoc routing protocols (AODV, DSR, OLSR and DSDV). Interestingly, our experiments show that evolving graphs have the potential to be an effective and powerful tool in the development and analysis of algorithms for dynamic networks, at least those with predictable dynamics. To make this model widely applicable, however, some practical issues, such as adaptive algorithms, still have to be addressed and incorporated into the model. We also discuss such issues in this paper, drawing on our experience.
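
A minimal sketch of least-cost (earliest-arrival) routing on an evolving graph, the kind of algorithm this model enables; the data structure is illustrative, not the paper's NS2 implementation:

```python
# Sketch: earliest-arrival journey on an evolving graph, where each edge
# carries the set of time steps at which it is available.
import heapq

def earliest_arrival(edges, source, target, t0=0):
    """edges: {node: [(neighbor, set_of_active_times), ...]}."""
    best = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue                      # stale queue entry
        for v, times in edges.get(u, []):
            # Wait at u until the edge next becomes active, then cross (1 step).
            nxt = min((s for s in times if s >= t), default=None)
            if nxt is not None and nxt + 1 < best.get(v, float("inf")):
                best[v] = nxt + 1
                heapq.heappush(heap, (nxt + 1, v))
    return None                           # target unreachable

topology = {"A": [("B", {0, 3})], "B": [("C", {4})]}
print(earliest_arrival(topology, "A", "C"))  # -> 5
```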

Relevance:

10.00%

Publisher:

Abstract:

Developing successful navigation and mapping strategies is an essential part of autonomous robot research. However, hardware limitations often make for inaccurate systems. This project investigates efficient alternatives for mapping an environment, by first creating a mobile robot and then applying machine learning to the robot and its controlling systems to increase the robustness of the overall system. My mapping system consists of a semi-autonomous robot drone in communication with a stationary Linux computer system, with learning systems running on both the robot and the more powerful Linux machine. The first stage of this project was devoted to designing and building an inexpensive robot. Drawing on prior experience from independent studies in robotics, I designed a small mobile robot well suited to simple navigation and mapping research. Once the major components of the robot base were designed, I began to implement the design: physically constructing the base as well as researching and acquiring components such as sensors. Implementing the more complex sensors became a time-consuming task, involving much research and assistance from a variety of sources. A concurrent stage of the project involved researching and experimenting with different types of machine learning systems; I finally settled on neural networks as the learning system to incorporate into the project. Neural nets can be thought of as structures of interconnected nodes through which information filters. The type of neural net I chose requires a known data set that serves to train the net to produce the desired output. Neural nets are particularly well suited to robotic systems because they can handle cases that lie at the extreme edges of the training set, such as those produced by "noisy" sensor data. Through experimenting with available neural net code, I became familiar with the code and its function, and modified it to be more generic and reusable across multiple applications.
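
A minimal sketch of the kind of supervised neural net described above, trained on a known data set; the sensor features and targets are hypothetical:

```python
# Sketch: a tiny supervised neural net (one hidden layer, plain NumPy)
# trained on a known input-output set, e.g., noisy range-sensor readings.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))                 # three hypothetical sensor readings
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0  # known desired outputs

W1 = rng.normal(size=(3, 8)) * 0.5
W2 = rng.normal(size=(8, 1)) * 0.5
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                         # plain gradient descent
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.1 * h.T @ d_out / len(X)
    W1 -= 0.1 * X.T @ d_h / len(X)

print(f"training accuracy: {((out > 0.5) == y).mean():.2f}")
```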

Relevance:

10.00%

Publisher:

Abstract:

This study contributes a rigorous diagnostic assessment of state-of-the-art multiobjective evolutionary algorithms (MOEAs) and highlights key advances that the water resources field can exploit to better discover the critical tradeoffs constraining our systems. This study provides the most comprehensive diagnostic assessment of MOEAs for water resources to date, exploiting more than 100,000 MOEA runs and trillions of design evaluations. The diagnostic assessment measures the effectiveness, efficiency, reliability, and controllability of ten benchmark MOEAs for a representative suite of water resources applications addressing rainfall-runoff calibration, long-term groundwater monitoring (LTM), and risk-based water supply portfolio planning. The suite of problems encompasses a range of challenging problem properties including (1) many-objective formulations with 4 or more objectives, (2) multi-modality (or false optima), (3) nonlinearity, (4) discreteness, (5) severe constraints, (6) stochastic objectives, and (7) non-separability (also called epistasis). The applications are representative of the dominant problem classes that have shaped the history of MOEAs in water resources and that will be dominant foci in the future. Recommendations are provided for which modern MOEAs should serve as tools and benchmarks in the future water resources literature.
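
A minimal sketch of the non-domination comparison that underlies the effectiveness metrics in such a diagnostic assessment (minimization assumed; the objective values are random placeholders):

```python
# Sketch: Pareto (non-domination) filter for many-objective solution sets.
import numpy as np

def pareto_front(objectives):
    """Return the rows of `objectives` not dominated by any other row."""
    obj = np.asarray(objectives, dtype=float)
    keep = []
    for i, p in enumerate(obj):
        # q dominates p iff q <= p in every objective and q < p in at least one.
        dominated = np.any(np.all(obj <= p, axis=1) & np.any(obj < p, axis=1))
        if not dominated:
            keep.append(i)
    return obj[keep]

# Toy 4-objective evaluations (e.g., cost, risk, error, time).
points = np.random.default_rng(3).random((50, 4))
print(len(pareto_front(points)), "non-dominated of 50")
```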

Relevance:

10.00%

Publisher:

Abstract:

This paper describes the formulation of a Multi-objective Pipe Smoothing Genetic Algorithm (MOPSGA) and its application to the least-cost water distribution network design problem. Evolutionary Algorithms have been widely utilised for the optimisation of both theoretical and real-world non-linear optimisation problems, including water system design and maintenance problems. In this work we present a pipe smoothing based approach to the creation and mutation of chromosomes which utilises engineering expertise with a view to increasing the performance of the algorithm whilst promoting engineering feasibility within the population of solutions. MOPSGA is based upon the standard Non-dominated Sorting Genetic Algorithm-II (NSGA-II) and incorporates a modified population initialiser and mutation operator which directly target elements of a network with the aim of increasing network smoothness (in terms of progression from one diameter to the next), using network element awareness and an elementary heuristic. The pipe smoothing heuristic used in this algorithm is based upon a fundamental principle employed by water system engineers when designing water distribution pipe networks: the diameter of any pipe is never greater than the sum of the diameters of the pipes directly upstream, resulting in a transition from large to small diameters from the source to the extremities of the network. MOPSGA is assessed on a number of water distribution network benchmarks from the literature, including some large-scale, real-world based systems. The performance of MOPSGA is directly compared to that of NSGA-II with regard to solution quality, engineering feasibility (network smoothness) and computational efficiency. MOPSGA is shown to promote both engineering and hydraulic feasibility whilst attaining good infrastructure costs compared to NSGA-II.
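
A minimal sketch of the pipe smoothing rule described above, assuming a simple upstream-adjacency representation of the network (identifiers and diameters are illustrative):

```python
# Sketch: check the pipe-smoothing rule -- a pipe's diameter must not
# exceed the sum of the diameters of the pipes directly upstream of it.
def is_smooth(network, diameters):
    """network: {pipe_id: [upstream_pipe_ids]}; diameters in mm."""
    for pipe, upstream in network.items():
        if not upstream:          # source mains have no upstream constraint
            continue
        if diameters[pipe] > sum(diameters[u] for u in upstream):
            return False
    return True

net = {"main": [], "branch1": ["main"], "branch2": ["main"],
       "spur": ["branch1", "branch2"]}
print(is_smooth(net, {"main": 300, "branch1": 200,
                      "branch2": 150, "spur": 250}))  # True
```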

Relevance:

10.00%

Publisher:

Abstract:

This paper investigates the impact of price limits on the Brazilian futures markets using high frequency data. The aim is to identify whether there is a cool-off or a magnet effect. For that purpose, we examine a tick-by-tick data set that includes all contracts on the São Paulo stock index futures traded on the Brazilian Mercantile and Futures Exchange from January 1997 to December 1999. Our main finding is that price limits drive prices back as they approach the lower limit. There is a strong cool-off effect of the lower limit on the conditional mean, whereas the upper limit seems to entail a weak magnet effect on the conditional variance. We then build a trading strategy that accounts for the cool-off effect so as to demonstrate that the latter has not only statistical, but also economic significance. The resulting Sharpe ratio is indeed well above the buy-and-hold benchmarks we consider.
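
For reference, the Sharpe ratio used for such comparisons is the mean excess return divided by its standard deviation, annualized; a small sketch with simulated return series (not the paper's data):

```python
# Sketch: annualized Sharpe ratio for comparing a strategy with buy-and-hold.
import numpy as np

def sharpe(returns, risk_free=0.0, periods_per_year=252):
    ex = np.asarray(returns) - risk_free / periods_per_year
    return ex.mean() / ex.std(ddof=1) * np.sqrt(periods_per_year)

rng = np.random.default_rng(4)
strategy = rng.normal(0.0008, 0.01, 750)    # hypothetical daily returns
buy_hold = rng.normal(0.0003, 0.012, 750)
print(f"strategy {sharpe(strategy):.2f} vs buy-and-hold {sharpe(buy_hold):.2f}")
```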

Relevance:

10.00%

Publisher:

Abstract:

The Brazilian Traction, Light and Power Company ("Light"), formed by Canadian entrepreneurs in 1899, operated for 80 years practically all of the infrastructure (streetcars, electricity, telephones, gas) of the Rio-São Paulo axis. The company went through several political cycles, from its founding to its nationalization in 1979. Over this 80-year period, the national infrastructure, initially private, gradually passed into the hands of the State. The sector would become private again from the 1990s onward, completing a private-public-private cycle similar to what occurred in more developed countries. Light, the foremost symbol of foreign capital until the 1950s, was initially well received in the country, since its development was symbiotic with industrial development, both cause and consequence of it. From the 1920s onward, economic and ideological debates grew over the role of private foreign capital in national development, vis-à-vis the option of the public sector as the main actor. It always remained unclear what Light's profits in Brazil had been, and whether they were excessive, beyond what was reasonable. Another recurring question is the extent to which tariff freezes contributed to the infrastructure supply crisis. Through research in primary sources, this dissertation seeks to reconstruct the history of Light with a focus on the rate of return on invested capital. Light's financial history in Brazil was reconstructed, and from it the returns obtained by the company's shareholders were computed for several periods and for its 80 years of existence. From the results obtained, and using comparative benchmarks, it was possible to show that: i) contrary to the belief prevailing at the time, the return obtained by the largest foreign investor in Brazil's infrastructure sector in the 20th century was well below the acceptable minimum, and ii) the repression of tariffs over several decades was in fact decisive for the underdevelopment of Brazil's infrastructure sector.

Relevance:

10.00%

Publisher:

Abstract:

This paper develops background considerations to help better frame the results of a CGE exercise. Three main criticisms are usually addressed to CGE efforts. First, they are too aggregate, their conclusions failing to shed light on relevant sectors or issues. Second, they imply huge data requirements; timeliness is frequently jeopardised by out-dated sources, with benchmarks referring to realities gone by. Finally, results are meaningless, as they answer wrong or ill-posed questions: modelling demands end up creating a rather artificial context in which the original questions lose content. In spite of a positive outlook on the first two, the crucial questions lie in the third point. After elaborating such questions, and trying to answer some, the text argues that CGE models can come closer to reality. Even if their use is still too scarce to give way to a fruitful symbiosis between negotiations and simulation results, they remain the only available technique providing a global, inter-related way of capturing the economy-wide effects of several different policies. International organisations can play a major role in supporting and encouraging improvements. They are also uniquely positioned to enhance information and data sharing, and to bring people from various origins together to share their experiences. Serious and complex homework is required, however, to correct at least the most dangerous present shortcomings of the technique.

Relevance:

10.00%

Publisher:

Abstract:

This paper investigates the impact of price limits on the Brazilian futures markets using high frequency data. The aim is to identify whether there is a cool-off or a magnet effect. For that purpose, we examine a tick-by-tick data set that includes all contracts on the São Paulo stock index futures traded on the Brazilian Mercantile and Futures Exchange from January 1997 to December 1999. The results indicate that the conditional mean features a floor cool-off effect, whereas the conditional variance significantly increases as the price approaches the upper limit. We then build a trading strategy that accounts for the cool-off effect in the conditional mean so as to demonstrate that the latter has not only statistical, but also economic significance. The in-sample Sharpe ratio is indeed well above the buy-and-hold benchmarks we consider, whereas out-of-sample results show similar performance.

Relevance:

10.00%

Publisher:

Abstract:

Electronic applications are currently developed under the reuse-based paradigm. This design methodology presents several advantages for the reduction of design complexity, but brings new challenges for the test of the final circuit. Access to embedded cores, the integration of several test methods, and the optimization of several cost factors are just a few of the problems that must be tackled during test planning. Within this context, this thesis proposes two test planning approaches that aim at reducing the test costs of a core-based system by means of hardware reuse and integration of the test planning into the design flow. The first approach considers systems whose cores are connected directly or through a functional bus. The test planning method consists of a comprehensive model that includes the definition of a multi-mode access mechanism inside the chip and a search algorithm for the exploration of the design space. The access mechanism model considers the reuse of functional connections as well as partial test buses, core transparency, and other bypass modes. The test schedule is defined in conjunction with the access mechanism so that good trade-offs among the costs of pins, area, and test time can be sought. Furthermore, system power constraints are also considered. This expansion of concerns makes possible an efficient, yet fine-grained, search in the huge design space of a reuse-based environment. Experimental results clearly show the variety of trade-offs that can be explored using the proposed model, and its effectiveness in optimizing the system test plan. Networks-on-chip are likely to become the main communication platform of systems-on-chip. Thus, the second approach presented in this work proposes the reuse of the on-chip network for the test of the cores embedded in systems that use this communication platform. A power-aware test scheduling algorithm that exploits the network characteristics to minimize the system test time is presented. The reuse strategy is evaluated for a number of system configurations, such as different positions of the cores in the network, power consumption constraints, and number of interfaces with the tester. Experimental results show that the parallelization capability of the network can be exploited to reduce the system test time, whereas area and pin overhead are strongly minimized. In this manuscript, the main problems of the test of core-based systems are first identified and current solutions are discussed. The problems tackled by this thesis are then listed and the test planning approaches are detailed. Both test planning techniques are validated on the recently released ITC'02 SoC Test Benchmarks and compared to other test planning methods from the literature. This comparison confirms the efficiency of the proposed methods.
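
A much-simplified sketch of power-aware test scheduling in the spirit of the second approach: tests start greedily whenever the summed power of running tests stays within the budget. Core test times and powers are illustrative, not ITC'02 data:

```python
# Sketch: greedy power-constrained test scheduling (longest test first).
# Assumes each test's power fits the budget on its own.
def schedule(tests, power_budget):
    """tests: {core: (test_time, test_power)} -> {core: start_time}."""
    pending = sorted(tests, key=lambda c: -tests[c][0])   # longest first
    running, start, t = [], {}, 0
    while pending or running:
        used = sum(tests[c][1] for c, _ in running)
        for c in list(pending):                           # start what fits
            if tests[c][1] + used <= power_budget:
                start[c] = t
                running.append((c, t + tests[c][0]))
                used += tests[c][1]
                pending.remove(c)
        t = min(end for _, end in running)                # next completion
        running = [(c, end) for c, end in running if end > t]
    return start

cores = {"cpu": (100, 40), "dsp": (80, 35), "mem": (60, 30), "io": (40, 10)}
print(schedule(cores, power_budget=70))
# -> {'cpu': 0, 'mem': 0, 'io': 60, 'dsp': 100}
```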

Relevance:

10.00%

Publisher:

Abstract:

Granting economic development incentives (or "EDIs") has become commonplace throughout the United States, but the efficiency of these mechanisms is generally unproven. Both the politicians granting, and the companies seeking, EDIs have incentives to overestimate the EDIs' benefits. For politicians, ribbon-cutting ceremonies can be a highly desirable opportunity to please political allies and financiers while demonstrating to the population that they are successful in promoting economic growth, even when the population would be better off otherwise. In turn, businesses are naturally prone to seek governmental aid. This explains in part why EDIs often "fail" (i.e., don't pay off). To increase transparency and mitigate the risk of EDI failure, local and state governments across the country have created a number of accountability mechanisms. The general trait of these accountability mechanisms is that they apply controls to some of the sub-risks that underlie the risk of EDI failure. These sub-risks include the companies receiving EDIs not generating the expected number of jobs, not investing enough in their local facilities, not attracting the expected additional business investment to the jurisdiction, etc. The problem with such schemes is that they tackle the problem of EDI failure very loosely: they are too narrow and leave multiplier effects uncontrolled. I propose a novel contractual framework for implementing accountability mechanisms. My suggestion is to establish controls on the risk of EDI failure itself, leaving its underlying sub-risks uncontrolled. I call this mechanism "Contingent EDIs", because the EDIs are made contingent on the government achieving a preset target that benchmarks the risk of EDI failure. If the target is met, the EDIs kick in ex post; if not, the EDIs never kick in.

Relevance:

10.00%

Publisher:

Abstract:

This work studies the behavior of the returns of fixed income investment funds in Brazil using a model based on the hypothesis that Financial Investment Funds (FIFs) and Funds of FIF Shares (FACs) are directly linked to the following variables: i) the stock market (IBOVESPA), ii) interest rates (CDI), and iii) the exchange rate (US Dollar). The study investigates two main aspects: a) whether a relationship exists between variations in these financial indicators and variations in fund returns, and b) how the factors explaining fund returns change over the months. The results show that the returns of the great majority of funds are explained by one, two, or three benchmarks; hence a fund's asset category can be interpreted as exposure to the performance of its benchmark. It was found, however, that choosing the most suitable benchmark for an investment fund requires knowledge of the composition of the fund's portfolio. Thus, through careful analysis of this information, investors should be convinced that a given fund's investment policy places it in a given category. Moreover, they must have the necessary assurance that they are classifying their fund(s) in the right category and, consequently, that they are comparing its performance on the basis of its associated risk, fairly and against comparable peers.
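
A minimal sketch of the kind of benchmark-exposure analysis described above: regress a fund's monthly returns on candidate benchmark returns; all series are simulated placeholders for IBOVESPA, CDI, and the dollar:

```python
# Sketch: estimate a fund's exposures to benchmark returns by OLS.
import numpy as np

rng = np.random.default_rng(5)
n = 60  # months
bench = {"IBOVESPA": rng.normal(0.010, 0.080, n),
         "CDI":      rng.normal(0.012, 0.002, n),
         "USD":      rng.normal(0.005, 0.040, n)}
# Hypothetical fund: mostly CDI with some dollar exposure.
fund = 0.8 * bench["CDI"] + 0.2 * bench["USD"] + rng.normal(0, 0.002, n)

X = np.column_stack([np.ones(n)] + list(bench.values()))
coef, *_ = np.linalg.lstsq(X, fund, rcond=None)
for name, b in zip(["alpha"] + list(bench), coef):
    print(f"{name:9s} {b:+.3f}")   # large loadings flag the relevant benchmarks
```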