982 results for Artificial Information Models


Relevance: 30.00%

Abstract:

1. Informative Bayesian priors can improve the precision of estimates in ecological studies, or allow estimation of parameters for which little or no information is available. While Bayesian analyses are becoming more popular in ecology, the use of strongly informative priors remains rare, perhaps because examples of informative priors are not readily available in the published literature.
2. Dispersal distance is an important ecological parameter, but is difficult to measure and estimates are scarce. General models that provide informative prior estimates of dispersal distances will therefore be valuable.
3. Using a world-wide data set on birds, we develop a predictive model of median natal dispersal distance that includes body mass, wingspan, sex and feeding guild. This model predicts median dispersal distance well for both the fitted data and an independent test data set, explaining up to 53% of the variation.
4. Using this model, we predict a priori estimates of median dispersal distance for 57 woodland-dependent bird species in northern Victoria, Australia. These estimates are then used to investigate the relationship between dispersal ability and vulnerability to landscape-scale changes in habitat cover and fragmentation.
5. We find evidence that woodland bird species with poor predicted dispersal ability are more vulnerable to habitat fragmentation than those species with longer predicted dispersal distances, thus improving the understanding of this important phenomenon.
6. The value of constructing informative priors from existing information is also demonstrated. When used as informative priors for four example species, predicted dispersal distances reduced the 95% credible intervals of posterior estimates of dispersal distance by 8-19%. Further, should we have wished to collect information on avian dispersal distances and relate it to species' responses to habitat loss and fragmentation, data from 221 individuals across 57 species would have been required to obtain estimates with the same precision as those provided by the general model.
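
To make the prior-construction idea concrete, here is a minimal sketch (not the study's actual model or data) of how a trait-model prediction of median dispersal distance could serve as an informative prior and narrow the posterior credible interval relative to a vague prior; the log-scale values, observation noise, and conjugate normal update are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's actual model): a model-predicted median
# natal dispersal distance is used as an informative prior on log-distance
# and updated with a handful of field observations via a conjugate
# normal-normal update. All numbers below are illustrative assumptions.

def posterior_normal(prior_mean, prior_sd, obs, obs_sd):
    """Conjugate update for a normal mean with known observation SD."""
    obs = np.asarray(obs, dtype=float)
    prior_prec = 1.0 / prior_sd**2
    data_prec = obs.size / obs_sd**2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * obs.mean())
    return post_mean, np.sqrt(post_var)

# Hypothetical prediction from a trait-based regression (log km scale).
prior_mean, prior_sd = np.log(2.0), 0.6      # informative prior
vague_sd = 10.0                              # near-flat alternative

rng = np.random.default_rng(1)
field_obs = rng.normal(np.log(2.5), 0.8, size=5)   # 5 simulated field records

for label, sd in [("informative", prior_sd), ("vague", vague_sd)]:
    m, s = posterior_normal(prior_mean, sd, field_obs, obs_sd=0.8)
    lo, hi = np.exp(m - 1.96 * s), np.exp(m + 1.96 * s)
    print(f"{label:>11} prior: 95% CI for median dispersal = "
          f"{lo:.2f}-{hi:.2f} km (width {hi - lo:.2f})")
```

With the informative prior, the posterior interval is narrower than with the vague prior, mirroring the reduction in credible-interval width reported in point 6.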

Relevance: 30.00%

Abstract:

This paper aims to establish, train, validate, and test artificial neural network (ANN) models for modelling the risk allocation decision-making process in public-private partnership (PPP) projects, drawing mainly upon transaction cost economics. An industry-wide questionnaire survey was conducted to examine risk allocation practice in PPP projects and collect the data for training the ANN models. The training and evaluation results, when compared with those obtained using the traditional MLR modelling technique, show that the ANN models are satisfactory for modelling the risk allocation decision-making process. The empirical evidence further verifies that it is appropriate to use transaction cost economics to interpret the risk allocation decision-making process. It is recommended that, in addition to the partners' risk management mechanism maturity level, decision-makers from both the public and private sectors should also seriously consider influential factors including the partners' risk management routines, cooperation history, risk management commitment, and risk management environmental uncertainty. All these factors influence the formation of optimal risk allocation strategies, through either their individual or their interacting effects.
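
As a rough illustration of the ANN-versus-MLR comparison described above, the following sketch fits both model types to synthetic questionnaire-style data; the feature set, ground-truth relationship, and network architecture are assumptions, not the paper's survey data or models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Illustrative sketch only: synthetic scores stand in for questionnaire
# responses on the factors named in the abstract; the target is the share
# of a risk allocated to the private partner.

rng = np.random.default_rng(0)
n = 300
X = rng.uniform(1, 5, size=(n, 5))  # maturity, routines, history, commitment, uncertainty
# Nonlinear, interacting ground truth so the ANN has something to gain.
y = (0.3 * X[:, 0] + 0.2 * X[:, 1] * X[:, 2] - 0.4 * np.log(X[:, 4])
     + rng.normal(0, 0.2, size=n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)

print("MLR test R^2:", round(r2_score(y_te, mlr.predict(X_te)), 3))
print("ANN test R^2:", round(r2_score(y_te, ann.predict(X_te)), 3))
```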

Relevance: 30.00%

Abstract:

Artificial skins exhibit different mechanical properties compared to natural skin. This drawback makes physical interaction with artificial skins different from interaction with natural skin. The present paper addresses improving the performance of artificial skins for robotic hands and medical applications. The idea is to add active control within artificial skins in order to improve their dynamic or static behaviour, which directly results in more interactive artificial skins. To achieve this goal, a piecewise-linear anisotropic model for artificial skins is derived. A model of a matrix of capacitive MEMS actuators, used for control, is then coupled with the artificial skin model. Next, an active surface-shaping control is applied through the capacitive MEMS actuators, which shapes the skin with zero error within a desired time. A simulation study is presented to validate the idea of using MEMS actuators for active artificial skins. In the simulation, we actively control 128 capacitive micro-actuators for an artificial fingertip. The fingertip attains the required shape within the required time, which means the dynamics of the skin are improved.
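
The following toy simulation (not the paper's piecewise-linear anisotropic skin model or its MEMS actuator model) illustrates the basic idea of actively driving a 128-element actuator grid toward a desired fingertip surface shape; the grid size, gain, and first-order closed-loop dynamics are assumptions.

```python
import numpy as np

# Toy sketch: a 16x8 grid (128 "actuators") whose heights are driven toward
# a desired fingertip bump with first-order closed-loop dynamics
# dz/dt = -k * (z - z_target), integrated with a simple Euler step.

rows, cols = 16, 8
xs, ys = np.meshgrid(np.linspace(-1, 1, cols), np.linspace(-1, 1, rows))
z_target = 0.5 * np.exp(-(xs**2 + ys**2) / 0.3)   # desired surface shape
z = np.zeros((rows, cols))                         # initial flat skin

k, dt, t_final = 8.0, 0.01, 1.0                    # gain chosen so error is tiny by t_final
for step in range(int(t_final / dt)):
    z += dt * (-k * (z - z_target))                # per-actuator control update

print("128 actuators, max shape error at t = 1 s:",
      float(np.abs(z - z_target).max()))
```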

Relevance: 30.00%

Abstract:

One approach to the detection of curves at subpixel accuracy involves the reconstruction of such features from subpixel edge data points. A new technique is presented for reconstructing and segmenting curves with subpixel accuracy using deformable models. A curve is represented as a snake: a set of interconnected Hermite splines, generated from the subpixel edge information, that minimizes a global energy functional integrated over the set. While previous work on the minimization was mostly based on the Euler-Lagrange transformation, the authors use the finite element method to solve the energy minimization equation. The advantages of this approach over the Euler-Lagrange transformation are that the method is straightforward, leads to positive m-diagonal symmetric matrices, and can cope with irregular geometries such as junctions and corners. The energy functional solved using this method can also be used to segment the features, by searching for the locations of the maxima of the first derivative of the energy over the elementary curve set.
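
A simplified sketch of the underlying snake idea is given below; it uses finite differences rather than the paper's Hermite-spline finite elements, and the coefficients and edge data are illustrative, but it shows how minimizing the quadratic internal-plus-data energy reduces to solving a banded symmetric positive-definite linear system.

```python
import numpy as np

# Simplified illustration, not the paper's Hermite-spline finite elements:
# a discrete open snake with a membrane term (alpha, first differences) and
# a thin-plate term (beta, second differences), attracted toward noisy
# subpixel edge samples. Minimizing the quadratic energy amounts to solving
# the banded symmetric positive-definite system (K + gamma*I) x = gamma*d,
# independently for each coordinate.

n, alpha, beta, gamma = 80, 1.0, 0.5, 2.0
D1 = np.diff(np.eye(n), axis=0)            # (n-1) x n first-difference operator
D2 = np.diff(np.eye(n), 2, axis=0)         # (n-2) x n second-difference operator
K = alpha * D1.T @ D1 + beta * D2.T @ D2   # internal-energy stiffness matrix

t = np.linspace(0, np.pi, n)
rng = np.random.default_rng(0)
edges = np.column_stack([np.cos(t), np.sin(t)]) + rng.normal(0, 0.03, (n, 2))

snake = np.linalg.solve(K + gamma * np.eye(n), gamma * edges)  # one column per coordinate
print("RMS distance from edge data:",
      float(np.sqrt(((snake - edges) ** 2).mean())))
```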

Relevance: 30.00%

Abstract:

This paper proposes a methodology for determining the shape, and ultimately the functionality, of objects from intensity images; 2D analytic functions are used to track 3D features during known camera motions. Three analytic functions are proposed that describe the relationship between pairs of points that are either stationary or moving, depending on whether or not the points lie on occluding boundaries. Many of the problems of correspondence are reduced by using foveation, known camera motion, and active vision methods. The three analytic functions are shown to enable hypothesis refinement of the functionality of a number of 3D objects without full 3D information about their shape.
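
The sketch below is a generic motion-parallax illustration, not the paper's three analytic functions: under a known lateral camera translation, image points at different depths that initially coincide separate by an amount that depends on depth, which is the kind of point-pair relationship such functions can exploit. The focal length, point positions, and translation are assumed values.

```python
import numpy as np

# Generic pinhole-projection sketch (assumed parameters, not the paper's
# model): two 3D points that project to the same pixel before a known
# camera translation separate afterwards because they lie at different
# depths, revealing which point belongs to the foreground surface.

f = 500.0                                   # focal length in pixels (assumed)

def project(point, cam_x):
    """Project a 3D point (X, Y, Z) for a camera translated by cam_x along X."""
    X, Y, Z = point
    return np.array([f * (X - cam_x) / Z, f * Y / Z])

near = (0.10, 0.0, 1.0)    # foreground point (e.g. on an occluding boundary)
far  = (0.40, 0.0, 4.0)    # background point initially seen at the same pixel

for cam_x in (0.0, 0.05):  # known camera translation of 5 cm along X
    u_near, u_far = project(near, cam_x)[0], project(far, cam_x)[0]
    print(f"cam_x={cam_x:.2f}: u_near={u_near:.1f}px  u_far={u_far:.1f}px  "
          f"parallax={u_near - u_far:.1f}px")
```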

Relevance: 30.00%

Abstract:

This paper presents a model for space in which an autonomous agent acquires information about its environment. The agent uses a predefined exploration strategy to build a map allowing it to navigate and deduce relationships between points in space. The shapes of objects in the environment are represented qualitatively. This shape information is deduced from the agent's motion. Normally, in a qualitative model, directional information degrades under transitive deduction. By reasoning about the shape of the environment, the agent can match visual events to points on the objects. This strengthens the model by allowing further relationships to be deduced. In particular, points that are separated by long distances, or complex surfaces, can be related by line-of-sight. These relationships are deduced without incorporating any metric information into the model. Examples are given to demonstrate the use of the model.
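
The toy example below (assumed relations and step lengths, not the paper's calculus) illustrates the statement that directional information degrades under transitive deduction: composing "B is north of A" with "C is east of B" leaves the bearing of C from A anywhere in a full quadrant, because the step lengths are unknown in a purely qualitative model.

```python
import numpy as np

# Toy illustration: with purely qualitative compass directions and no
# distances, the bearing of C from A after the steps A->B (north) and
# B->C (east) can lie anywhere between due east and due north, depending
# on the unknown step lengths. This is the degradation the paper's
# shape-based line-of-sight reasoning helps to counteract.

def bearing_of_C(north_step, east_step):
    """Bearing of C from A in degrees (0 = east, 90 = north)."""
    return np.degrees(np.arctan2(north_step, east_step))

for north_step, east_step in [(10.0, 0.1), (1.0, 1.0), (0.1, 10.0)]:
    print(f"steps N={north_step:>4}, E={east_step:>4} -> "
          f"bearing of C from A = {bearing_of_C(north_step, east_step):5.1f} deg")
```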

Relevance: 30.00%

Abstract:

Previous studies in speculative prefetching focus on building and evaluating access models for the purpose of access prediction. This paper investigates a complementary area which has been largely ignored, that of performance modelling. We use improvement in access time as the performance metric, for which we derive a formula in terms of resource parameters (time available and time required for prefetching) and speculative parameters (probabilities for next access). The performance maximization problem is expressed as a stretch knapsack problem. We develop an algorithm to maximize the improvement in access time by solving the stretch knapsack problem, using theoretically proven apparatus to reduce the search space. Integration between speculative prefetching and caching is also investigated, albeit under the assumption of equal item sizes.
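
As a simplified stand-in for the selection problem described above (not the paper's stretch-knapsack formulation or its derived formula), the sketch below picks which items to prefetch during an idle window so that the expected access-time saving is maximised, under the assumption that each prefetched item saves its full fetch cost weighted by its access probability.

```python
from itertools import combinations

# Simplified stand-in, not the paper's stretch-knapsack formulation: choose
# which candidate items to prefetch during an idle window of T seconds so
# that the expected reduction in access time is maximised. Each candidate i
# has an access probability p[i] (speculative parameter) and a fetch cost
# c[i] (resource parameter); prefetching i is assumed to save p[i] * c[i]
# on average. Brute force over subsets is fine for this tiny example.

T = 1.0                                   # idle time available for prefetching
p = [0.50, 0.30, 0.15, 0.05]              # probability item is requested next
c = [0.60, 0.70, 0.40, 0.20]              # time to fetch each item

best_gain, best_set = 0.0, ()
for k in range(len(p) + 1):
    for subset in combinations(range(len(p)), k):
        cost = sum(c[i] for i in subset)
        gain = sum(p[i] * c[i] for i in subset)
        if cost <= T and gain > best_gain:
            best_gain, best_set = gain, subset

print("prefetch items", best_set, "expected access-time saving", round(best_gain, 3))
```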