956 results for Probabilistic robotics
Abstract:
Free hardware platforms have become very important in engineering education in recent years. Among these platforms, Arduino stands out, characterized by its versatility, popularity and low price. This paper describes the implementation of four laboratory experiments for Automatic Control and Robotics courses at the University of Alicante, developed using Arduino together with other existing equipment. The results were evaluated taking into account the views of students, concluding that the proposed experiments were attractive to them and that they acquired the intended knowledge of hardware configuration and programming.
Abstract:
For non-negative random variables with finite means we introduce an analogue of the equilibrium residual-lifetime distribution based on the quantile function. This allows us to construct new distributions with support (0, 1), and to obtain a new quantile-based version of the probabilistic generalization of Taylor's theorem. Similarly, for pairs of stochastically ordered random variables we obtain a new quantile-based form of the probabilistic mean value theorem. The latter involves a distribution that generalizes the Lorenz curve. We investigate the special case of proportional quantile functions and apply the given results to various models based on classes of distributions and measures of risk theory. Motivated by some stochastic comparisons, we also introduce the "expected reversed proportional shortfall order" and a new characterization of random lifetimes involving the reversed hazard rate function.
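For reference, a brief sketch of the classical, distribution-function-based objects that the quantile-based constructions above generalize; the notation (X, Y, g, μ) is generic and not taken from the paper.

```latex
% Classical equilibrium residual-lifetime distribution of a non-negative X with mean \mu:
\[
  f_e(x) \;=\; \frac{\bar F(x)}{\mu}, \qquad x \ge 0, \qquad \bar F(x) = P(X > x).
\]
% Probabilistic mean value theorem for X \le_{st} Y with finite means and differentiable g:
\[
  \mathbb{E}[g(Y)] - \mathbb{E}[g(X)] \;=\; \mathbb{E}[g'(Z)]\,\bigl(\mathbb{E}[Y]-\mathbb{E}[X]\bigr),
  \qquad f_Z(z) \;=\; \frac{F_X(z)-F_Y(z)}{\mathbb{E}[Y]-\mathbb{E}[X]} .
\]
```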
Abstract:
In the long term, productivity and especially productivity growth are necessary conditions for the survival of a farm. This paper focuses on the technology choice of a dairy farm, i.e. the choice between a conventional and an automatic milking system. Its aim is to reveal the extent to which economic rationality explains investing in new technology. The adoption of robotics is further linked to farm productivity to show how capital-intensive technology has affected the overall productivity of milk production. The empirical analysis applies a probit model and an extended Cobb-Douglas-type production function to a Finnish farm-level dataset for the years 2000–10. The results show that very few economic factors on a dairy farm or in its economic environment can be identified that affect the switch to automatic milking. Existing machinery capital and investment allowances are among the significant factors. The results also indicate that the probability of investing in robotics responds elastically to a change in investment aids: an increase of 1% in aid would generate an increase of 2% in the probability of investing. Despite the presence of non-economic incentives, the switch to robotic milking is shown to promote productivity development on dairy farms. No productivity growth is observed on farms that keep conventional milking systems, whereas farms with robotic milking have a growth rate of 8.1% per year. The mean rate for farms that switch to robotic milking is 7.0% per year. These rates represent great progress in productivity growth relative to the sector average of around 2% per year during the past two decades. In conclusion, investments in new technology, as well as investment aids to encourage them, are needed in low-productivity areas where investments in new technology still have great potential to increase productivity, and thus profitability and competitiveness, in the long run.
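A schematic of the two model classes named in the abstract, written in generic notation; the regressors x and inputs x_k are placeholders, not the paper's actual variable list.

```latex
% Probit adoption model: probability that farm i switches to automatic milking
\[
  \Pr(\text{adopt}_i = 1 \mid x_i) \;=\; \Phi(x_i^{\top}\beta).
\]
% Cobb-Douglas-type production function in log form (output y, inputs x_{ikt}):
\[
  \ln y_{it} \;=\; \alpha + \sum_{k} \beta_k \ln x_{ikt} + \varepsilon_{it}.
\]
```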
Abstract:
Since the beginning of the growing interest in robotics, autonomous navigation has been a problem of complex resolution and has therefore attracted broad interest in the scientific community. Moreover, the capabilities of autonomous navigation combined with robotics enable the development of a wide variety of applications. The goal of autonomous navigation is to give a motorized device the ability to make decisions about its own locomotion. To this end, sensors such as IMUs, a GPS receiver and encoders are used to provide the data essential for navigation. The difficulty lies in correctly processing these signals, since they are susceptible to noise sources. This work presents an autonomous navigation system applied to the control of a robot. To that end, an application was developed that hosts the entire localization, navigation and control system, together with a graphical interface that allows the robot's autonomous movement to be visualized on a map. The Kalman Filter is used as the probabilistic position-estimation method, in which the signals from the various sensors are combined and filtered. Several tests were carried out to evaluate the robot's ability to reach the planned waypoints and its autonomy in following the intended trajectory.
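As an illustration of the kind of sensor fusion described above, a minimal Kalman filter sketch in Python (constant-velocity model, single axis, position-only measurements); the matrices, noise levels and simulated measurement stream are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

# Constant-velocity state [position, velocity]; dt is the sampling period (assumed).
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # only position is measured (e.g. GPS)
Q = 0.01 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[0.5]])                   # measurement noise covariance (assumed)

x = np.array([[0.0], [0.0]])            # initial state estimate
P = np.eye(2)                           # initial covariance

def kalman_step(x, P, z):
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with measurement z
    y = z - H @ x_pred                   # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Fuse a stream of noisy position measurements from a simulated 0.5 m/s motion.
rng = np.random.default_rng(0)
for k in range(50):
    true_pos = 0.5 * k * dt
    z = np.array([[true_pos + rng.normal(0.0, 0.7)]])
    x, P = kalman_step(x, P, z)
print("estimated position/velocity:", x.ravel())
```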
Abstract:
The marine cycle of calcium carbonate (CaCO3) is an important element of the carbon cycle and co-governs the distribution of carbon and alkalinity within the ocean. However, CaCO3 export fluxes and the mechanisms governing CaCO3 dissolution are highly uncertain. We present an observationally constrained, probabilistic assessment of the global and regional CaCO3 budgets. Parameters governing pelagic CaCO3 export fluxes and dissolution rates are sampled using a Monte Carlo scheme to construct a 1000-member ensemble with the Bern3D ocean model. Ensemble results are constrained by comparing simulated and observation-based fields of excess dissolved calcium carbonate (TA*). The minerals calcite and aragonite are modelled explicitly and ocean–sediment fluxes are considered. For local dissolution rates, either a strong or a weak dependency on CaCO3 saturation is assumed. In addition, there is the option to have saturation-independent dissolution above the saturation horizon. The median (and 68 % confidence interval) of the constrained model ensemble for global biogenic CaCO3 export is 0.90 (0.72–1.05) Gt C yr−1, which lies within the lower half of previously published estimates (0.4–1.8 Gt C yr−1). The spatial pattern of CaCO3 export is broadly consistent with earlier assessments. Export is large in the Southern Ocean, the tropical Indo–Pacific and the northern Pacific, and relatively small in the Atlantic. The constrained results are robust across a range of diapycnal mixing coefficients and, thus, ocean circulation strengths. Modelled ocean circulation and transport timescales for the different set-ups were further evaluated with CFC11 and radiocarbon observations. Parameters and mechanisms governing dissolution are hardly constrained by either the TA* data or the current compilation of CaCO3 flux measurements, such that model realisations with and without saturation-dependent dissolution achieve skill. We suggest applying saturation-independent dissolution rates in Earth system models to minimise computational costs.
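A toy sketch of the ensemble-constraining idea described above: sample uncertain parameters, run a forward model, score each member against observations, and report a constrained median with a 68 % interval. The stand-in model, parameter ranges and skill metric below are invented for illustration and have nothing to do with the Bern3D configuration.

```python
import numpy as np

rng = np.random.default_rng(42)
n_members = 1000

# Hypothetical uncertain parameters: export efficiency and a dissolution-rate constant.
export_eff = rng.uniform(0.5, 1.5, n_members)
diss_rate  = rng.uniform(0.1, 2.0, n_members)

# Trivial stand-in forward model producing one scalar "observable" per member.
obs = 1.0                                      # pseudo-observation
pred = export_eff * np.exp(-0.3 * diss_rate)   # toy model output

# Skill-based weights: members closer to the observation count more.
sigma_obs = 0.2
weights = np.exp(-0.5 * ((pred - obs) / sigma_obs) ** 2)
weights /= weights.sum()

# Constrained estimate of a diagnostic (here simply the export-efficiency parameter).
order = np.argsort(export_eff)
cdf = np.cumsum(weights[order])
median = export_eff[order][np.searchsorted(cdf, 0.5)]
lo, hi = (export_eff[order][np.searchsorted(cdf, q)] for q in (0.16, 0.84))
print(f"constrained value: {median:.2f} ({lo:.2f}-{hi:.2f})")
```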
Abstract:
Rocks used as construction aggregate in temperate climates deteriorate to differing degrees because of repeated freezing and thawing. The magnitude of the deterioration depends on the rock's properties. Aggregate, including crushed carbonate rock, is required to have minimum geotechnical qualities before it can be used in asphalt and concrete. In order to reduce the chances of premature and expensive repairs, extensive freeze-thaw tests are conducted on potential construction rocks. These tests typically involve 300 freeze-thaw cycles and can take four to five months to complete. Less time-consuming tests that (1) predict durability as well as the extended freeze-thaw test does, or that (2) reduce the number of rocks subject to the extended test, could save considerable amounts of money. Here we use a probabilistic neural network to try to predict durability, as determined by the freeze-thaw test, from four rock properties measured on 843 limestone samples from the Kansas Department of Transportation. Modified freeze-thaw tests and less time-consuming specific gravity (dry), specific gravity (saturated), and modified absorption tests were conducted on each sample. Durability factors of 95 or more as determined from the extensive freeze-thaw tests are viewed as acceptable—rocks with values below 95 are rejected. If only the modified freeze-thaw test is used to predict which rocks are acceptable, about 45% are misclassified. When 421 randomly selected samples and all four standardized and scaled variables were used to train a probabilistic neural network, the rate of misclassification of 422 independent validation samples dropped to 28%. The network was trained so that each class (group) and each variable had its own coefficient (sigma). In an attempt to reduce errors further, an additional class was added to the training data to predict durability values greater than 84 and less than 98, resulting in only 11% of the samples being misclassified. About 43% of the test data was classed by the neural net into the middle group—these rocks should be subject to full freeze-thaw tests. Thus, use of the probabilistic neural network would mean that the extended test would only need to be applied to 43% of the samples, and 11% of the rocks classed as acceptable would fail early.
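A minimal sketch of a probabilistic neural network (Parzen-window classifier) with a per-class smoothing parameter, in the spirit of the per-class/per-variable sigmas mentioned above; the features, sigma values and data are placeholders, not the KDOT dataset.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigmas):
    """Probabilistic neural network: the score for a class is the mean Gaussian
    kernel between a test point and the training patterns of that class."""
    classes = np.unique(y_train)
    scores = np.zeros((len(X_test), len(classes)))
    for j, c in enumerate(classes):
        Xc = X_train[y_train == c]
        s = sigmas[c]                       # per-class smoothing parameter
        # squared distances between every test point and every pattern of class c
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(axis=2)
        scores[:, j] = np.exp(-d2 / (2.0 * s ** 2)).mean(axis=1)
    # normalise to pseudo-probabilities and pick the most likely class
    probs = scores / scores.sum(axis=1, keepdims=True)
    return classes[np.argmax(probs, axis=1)], probs

# Toy example: two standardized rock properties, two durability classes.
rng = np.random.default_rng(1)
X_train = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)
labels, probs = pnn_predict(X_train, y_train, rng.normal(1, 1, (5, 2)),
                            sigmas={0: 0.8, 1: 0.8})
print(labels, probs.round(2))
```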
Abstract:
Network building and exchange of information by people within networks is crucial to the innovation process. Contrary to older models, in social networks the flow of information is noncontinuous and nonlinear. There are critical barriers to information flow that operate in a problematic manner. New models and new analytic tools are needed for these systems. This paper introduces the concept of virtual circuits and draws on recent concepts of network modelling and design to introduce a probabilistic switch theory that can be described using matrices. It can be used to model multistep information flow between people within organisational networks, to provide formal definitions of efficient and balanced networks and to describe distortion of information as it passes along human communication channels. The concept of multi-dimensional information space arises naturally from the use of matrices. The theory and the use of serial diagonal matrices have applications to organisational design and to the modelling of other systems. It is hypothesised that opinion leaders or creative individuals are more likely to emerge at information-rich nodes in networks. A mathematical definition of such nodes is developed and it does not invariably correspond with centrality as defined by early work on networks.
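A small sketch of the matrix view of multistep information flow suggested above: pairwise transmission probabilities are held in a matrix and chained over steps. The network, probabilities and the interpretation of each link as a probabilistic "switch" are illustrative assumptions, not the paper's formalism.

```python
import numpy as np

# T[i, j]: probability that information held by person i is passed to person j
# in one step (a probabilistic switch on each link; values are illustrative).
T = np.array([
    [0.0, 0.6, 0.1, 0.0],
    [0.2, 0.0, 0.5, 0.3],
    [0.0, 0.4, 0.0, 0.7],
    [0.1, 0.0, 0.2, 0.0],
])

# p[i]: probability that person i currently holds the piece of information.
p = np.array([1.0, 0.0, 0.0, 0.0])   # it starts with person 0
not_received = 1.0 - p

for step in range(3):
    # chance that node j receives the item this step from at least one informed
    # neighbour, treating transmissions along links as independent
    receive = 1.0 - np.prod(1.0 - T * p[:, None], axis=0)
    not_received *= (1.0 - receive)
    p = 1.0 - not_received
    print(f"after step {step + 1}: {p.round(3)}")

# Nodes with large incoming transmission strength are candidate information-rich nodes.
print("column sums (incoming strength):", T.sum(axis=0).round(2))
```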
Abstract:
Evolutionary algorithms perform optimization using a population of sample solution points. An interesting development has been to view population-based optimization as the process of evolving an explicit, probabilistic model of the search space. This paper investigates a formal basis for continuous, population-based optimization in terms of a stochastic gradient descent on the Kullback-Leibler divergence between the model probability density and the objective function, represented as an unknown density of assumed form. This leads to an update rule that is related to, and compared with, previous theoretical work, a continuous version of the population-based incremental learning algorithm, and the generalized mean shift clustering framework. Experimental results are presented that demonstrate the dynamics of the new algorithm on a set of simple test problems.
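A toy sketch of the general idea of evolving an explicit probability model with gradient-style steps toward the objective, treated as an unnormalised density: a Gaussian search distribution whose mean and variance are nudged toward fitness-weighted sample statistics. The objective, learning rate and weighting scheme are illustrative and are not the update rule derived in the paper.

```python
import numpy as np

def objective(x):
    # Simple 1-D test problem to be maximised (illustrative only).
    return np.exp(-0.5 * (x - 3.0) ** 2) + 0.5 * np.exp(-0.5 * (x + 2.0) ** 2)

rng = np.random.default_rng(0)
mu, sigma = 0.0, 5.0          # parameters of the explicit Gaussian model
eta = 0.3                     # learning rate for the gradient-style update
pop_size = 200

for gen in range(30):
    x = rng.normal(mu, sigma, pop_size)          # sample the current model
    w = objective(x)
    w = w / w.sum()                              # fitness as an unnormalised density
    # Move the model toward the fitness-weighted sample moments: a stochastic
    # step that reduces the mismatch between model density and objective.
    mu_target = np.sum(w * x)
    var_target = np.sum(w * (x - mu_target) ** 2)
    mu = (1 - eta) * mu + eta * mu_target
    sigma = np.sqrt((1 - eta) * sigma ** 2 + eta * var_target)

print(f"model after optimisation: mu={mu:.3f}, sigma={sigma:.3f}")
```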
Abstract:
Deregulation and market practices in the power industry have brought great challenges to the system planning area. In particular, they introduce a variety of uncertainties into system planning. New techniques are required to cope with such uncertainties. As a promising approach, probabilistic methods are attracting more and more attention from system planners. In small signal stability analysis, generation control parameters play an important role in determining the stability margin. The objective of this paper is to investigate power system state matrix sensitivity characteristics with respect to system parameter uncertainties using analytical and numerical approaches, and to identify those parameters that have a great impact on the system eigenvalues and, therefore, on the system's stability properties. Variations in those identified parameters need to be investigated with priority. The results can be used to help Regional Transmission Organizations (RTOs) and Independent System Operators (ISOs) perform planning studies under the open access environment.
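As background to the eigenvalue sensitivity idea mentioned above, a small numerical sketch using the standard first-order formula dλ/dp = ψᴴ(∂A/∂p)φ / (ψᴴφ) with left and right eigenvectors ψ, φ; the state matrix and its parameter dependence below are made up for illustration and are not the paper's system model.

```python
import numpy as np
from scipy.linalg import eig

def state_matrix(p):
    # Illustrative 3x3 state matrix depending on a control parameter p (assumed).
    return np.array([[-1.0,  2.0,       0.0],
                     [ 0.5, -2.0 * p,   1.0],
                     [ 0.0,  1.0,      -3.0]])

p0 = 1.0
A = state_matrix(p0)
# Left (psi) and right (phi) eigenvectors returned in matching eigenvalue order.
lam, psi, phi = eig(A, left=True, right=True)

# Finite-difference estimate of dA/dp around p0.
dA_dp = (state_matrix(p0 + 1e-6) - state_matrix(p0 - 1e-6)) / 2e-6

# First-order sensitivity of each eigenvalue to the parameter p.
for i in range(len(lam)):
    num = psi[:, i].conj() @ dA_dp @ phi[:, i]
    den = psi[:, i].conj() @ phi[:, i]
    print(f"lambda_{i} = {lam[i]:.3f}, d lambda/dp ≈ {(num / den):.3f}")
```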
Abstract:
We consider the statistical problem of catalogue matching from a machine learning perspective, with the goal of producing probabilistic outputs and using all available information. A framework is provided that unifies two existing approaches to producing probabilistic outputs in the literature: one based on combining distribution estimates and the other based on combining probabilistic classifiers. We apply both of these to the problem of matching the HI Parkes All Sky Survey radio catalogue, which has large positional uncertainties, to the much denser SuperCOSMOS catalogue, which has much smaller positional uncertainties. We demonstrate the utility of probabilistic outputs through a controllable completeness and efficiency trade-off and by identifying objects that have a high probability of being rare. Finally, possible biasing effects in the output of these classifiers are also highlighted and discussed.
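To illustrate the completeness/efficiency trade-off that probabilistic match scores enable, a tiny sketch that sweeps a probability threshold; the scores and ground-truth labels are simulated, not drawn from HIPASS or SuperCOSMOS.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
is_true_match = rng.random(n) < 0.3                       # simulated ground truth
# Simulated classifier probabilities: true matches score higher on average.
p_match = np.clip(rng.normal(np.where(is_true_match, 0.75, 0.35), 0.15), 0, 1)

for thresh in (0.3, 0.5, 0.7, 0.9):
    accepted = p_match >= thresh
    completeness = (accepted & is_true_match).sum() / is_true_match.sum()
    efficiency = (accepted & is_true_match).sum() / max(accepted.sum(), 1)
    print(f"threshold {thresh:.1f}: completeness {completeness:.2f}, "
          f"efficiency {efficiency:.2f}")
```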
Abstract:
Background: The structure of proteins may change as a result of the inherent flexibility of some protein regions. We develop and explore probabilistic machine learning methods for predicting a continuum secondary structure, i.e. assigning probabilities to the conformational states of a residue. We train our methods using data derived from high-quality NMR models. Results: Several probabilistic models not only successfully estimate the continuum secondary structure, but also provide a categorical output on par with models directly trained on categorical data. Importantly, models trained on the continuum secondary structure are also better than their categorical counterparts at identifying the conformational state for structurally ambivalent residues. Conclusion: Cascaded probabilistic neural networks trained on the continuum secondary structure exhibit better accuracy in structurally ambivalent regions of proteins, while sustaining an overall classification accuracy on par with standard, categorical prediction methods.
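A minimal sketch of the continuum-training idea: fitting a softmax classifier against probabilistic (soft) state assignments rather than hard labels, with cross-entropy as the loss. The three states, the features and the plain gradient-descent loop are illustrative, not the cascaded networks used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, k = 500, 10, 3                 # residues, features, conformational states (e.g. H/E/C)
X = rng.normal(size=(n, d))
# Soft targets: per-residue probabilities over the k states (rows sum to 1).
raw = rng.random((n, k))
Y_soft = raw / raw.sum(axis=1, keepdims=True)

W = np.zeros((d, k))
lr = 0.1
for epoch in range(200):
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    P = np.exp(logits)
    P /= P.sum(axis=1, keepdims=True)
    # Cross-entropy against the continuum (soft) labels; gradient is X^T (P - Y) / n.
    grad = X.T @ (P - Y_soft) / n
    W -= lr * grad

# A categorical prediction is still available by taking the most probable state.
pred_state = np.argmax(X @ W, axis=1)
print("predicted state counts:", np.bincount(pred_state, minlength=k))
```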