984 results for Unknown Point
Abstract:
Data augmentation is a powerful technique for estimating models with latent or missing data, but applications in agricultural economics have thus far been few. This paper showcases the technique in an application to data on milk market participation in the Ethiopian highlands. There, a key impediment to economic development is an apparently low rate of market participation. Consequently, economic interest centers on the “locations” of nonparticipants in relation to the market and their “reservation values” across covariates. These quantities are of policy interest because they provide measures of the additional inputs necessary for nonparticipants to enter the market. One quantity of primary interest is the minimum amount of surplus milk (the “minimum efficient scale of operations”) that the household must acquire before market participation becomes feasible. We estimate this quantity through routine application of data augmentation and Gibbs sampling applied to a random-censored Tobit regression. Incorporating random censoring markedly affects the household's marketable-surplus requirements but only slightly the covariate-requirement estimates, and generally leads to more plausible policy estimates than those obtained from the zero-censored formulation.
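The data-augmentation Gibbs sampler for a censored Tobit model can be sketched as follows. This is a minimal illustration for a standard zero-censored Tobit with noninformative priors on simulated data, not the paper's random-censoring model; all variable names, parameter values, and the simulated dataset are assumptions.

```python
import numpy as np
from scipy.stats import truncnorm, invgamma

rng = np.random.default_rng(0)

# Simulated zero-censored data (illustrative only)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
y_star = X @ beta_true + rng.normal(size=n)   # latent outcome
y = np.maximum(y_star, 0.0)                   # observed, censored at zero
cens = y_star <= 0.0                          # censoring indicator

def gibbs_tobit(X, y, cens, iters=2000):
    n, k = X.shape
    beta, sig2 = np.zeros(k), 1.0
    z = y.copy()                              # augmented latent data
    XtX_inv = np.linalg.inv(X.T @ X)
    draws = []
    for _ in range(iters):
        # Step 1 (data augmentation): impute latent y* for censored obs
        # from a normal truncated above at the censoring point 0.
        mu, sd = X[cens] @ beta, np.sqrt(sig2)
        z[cens] = truncnorm.rvs(-np.inf, (0.0 - mu) / sd,
                                loc=mu, scale=sd, random_state=rng)
        # Step 2: beta | z, sig2 is normal around the OLS fit to z
        beta_hat = XtX_inv @ X.T @ z
        beta = rng.multivariate_normal(beta_hat, sig2 * XtX_inv)
        # Step 3: sig2 | z, beta is inverse gamma
        resid = z - X @ beta
        sig2 = invgamma.rvs(n / 2.0, scale=resid @ resid / 2.0,
                            random_state=rng)
        draws.append(beta.copy())
    return np.array(draws)

draws = gibbs_tobit(X, y, cens)
print(draws[1000:].mean(axis=0))   # posterior means, near beta_true
```

Discarding the first half of the chain as burn-in, the posterior means recover the coefficients used to simulate the data.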
Abstract:
Abstract not available
Abstract:
This paper proposes a novel framework to construct a geometric and photometric model of a viewed object that can be used for visualisation in arbitrary pose and illumination. The method is solely based on images and does not require any specialised equipment. We assume that the object has a piece-wise smooth surface and that its reflectance can be modelled using a parametric bidirectional reflectance distribution function. Without assuming any prior knowledge on the object, geometry and reflectance have to be estimated simultaneously and occlusion and shadows have to be treated consistently. We exploit the geometric and photometric consistency using the fact that surface orientation and reflectance are local invariants. In a first implementation, we demonstrate the method using a Lambertian object placed on a turn-table and illuminated by a number of unknown point light-sources. A discrete voxel model is initialised to the visual hull and voxels identified as inconsistent with the invariants are removed iteratively. The resulting model is used to render images in novel pose and illumination. © 2004 Elsevier B.V. All rights reserved.
Abstract:
The goal of this paper is to present a methodology for quality control of horizontal geodetic networks through robustness and covariance analysis. In the proposed methodology, the positional accuracy of each point is estimated by a possible bias in its position (based on robustness analysis), in addition to its own positional precision (uncertainty) (through covariance analysis), making it a measure independent of the choice of datum. Besides presenting the theoretical development of the method, its application is demonstrated in a numerical example. The results indicate that, in general, the greater the distance of an unknown point from the control point(s) of the network, the greater the propagation of random errors onto this point, and the smaller the number of redundant observations around an unknown point, the greater the influence of possible (undetected) non-random errors on it.
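The covariance-analysis half of this idea can be illustrated with a toy least-squares network. The levelling chain below is a hypothetical example (the network geometry and the 5 mm observation precision are assumed, not taken from the paper); it shows the stated effect that the unknown farther from the control point accumulates more random-error propagation.

```python
import numpy as np

# Levelling chain: control point h0 = 0 (fixed), unknowns h1, h2.
# Observed height differences: (h1 - h0) and (h2 - h1), each with
# standard deviation sigma. h2 is two hops from the control, h1 one hop.
A = np.array([[1.0, 0.0],    # observation h1 - h0
              [-1.0, 1.0]])  # observation h2 - h1
sigma = 0.005                # 5 mm per observation (assumed)

# Least-squares covariance of the estimated heights: Qx = (A^T A)^-1 sigma^2
Qx = np.linalg.inv(A.T @ A) * sigma**2
std = np.sqrt(np.diag(Qx))
print(std)  # h2, farther from the control, has the larger uncertainty
```

Here std comes out as (sigma, sigma * sqrt(2)): random errors accumulate along the chain, exactly the distance effect described in the abstract.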
Abstract:
We demonstrate that the process of generating smooth transitions can be viewed as a natural result of the filtering operations implied in the generation of discrete-time series observations from the sampling of data from an underlying continuous-time process that has undergone structural change. In order to focus the discussion, we utilize the problem of estimating the location of abrupt shifts in some simple time series models. This approach permits us to address salient issues relating to distortions induced by the inherent aggregation associated with discrete-time sampling of continuous-time processes experiencing structural change. We also address the issue of how time-irreversible structures may be generated within the smooth transition processes. © 2005 Elsevier Inc. All rights reserved.
Abstract:
The flood flow in urbanised areas constitutes a major hazard to the population and infrastructure, as seen during the summer 2010-2011 floods in Queensland (Australia). Flood flows in urban environments have been studied only relatively recently, although no study has considered the impact of turbulence in the flow. During the 12-13 January 2011 flood of the Brisbane River, turbulence measurements were conducted in an inundated urban environment in Gardens Point Road next to Brisbane's central business district (CBD) at relatively high frequency (50 Hz). The properties of the sediment flood deposits were characterised, and the acoustic Doppler velocimeter unit was calibrated to obtain both instantaneous velocity components and suspended sediment concentration in the same sampling volume with the same temporal resolution. While the flow motion in Gardens Point Road was subcritical, the water elevations and velocities fluctuated with a distinctive period between 50 and 80 s. The low-frequency fluctuations were linked with local topographic effects: a local choke induced by an upstream constriction between stairwells caused slow oscillations with a period close to the natural sloshing period of the car park. The instantaneous velocity data were analysed using a triple decomposition, and the same triple decomposition was applied to the water depth, velocity flux, suspended sediment concentration and suspended sediment flux data. The velocity fluctuation data showed a large energy component in the slow fluctuation range. For the first two tests at z = 0.35 m, the turbulence data suggested some isotropy. At z = 0.083 m, on the other hand, the findings indicated some flow anisotropy. The suspended sediment concentration (SSC) data presented a general trend of increasing SSC with decreasing water depth. During one test (T4), some long-period oscillations were observed with a period of about 18 minutes.
The cause of these oscillations remains unknown to the authors. The last test (T5) took place in very shallow water with high suspended sediment concentrations. It is suggested that the flow in the car park was disconnected from the main channel. Overall, the flow conditions at the sampling sites corresponded to a specific momentum between 0.2 and 0.4 m², which would be near the upper end of the scale for safe evacuation of individuals in flooded areas. But the authors do not believe the evacuation of individuals in Gardens Point Road would have been safe, because of the intense water surges and flow turbulence. More generally, any criterion for safe evacuation based solely upon the flow velocity, water depth or specific momentum cannot account for the hazards caused by flow turbulence, water depth fluctuations and water surges.
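A triple decomposition of the kind mentioned above splits each record into a time mean, a slow (low-frequency) fluctuation, and a turbulent residual. Below is a minimal sketch on synthetic data; the moving-average filter, the 60 s oscillation, and all amplitudes are illustrative assumptions, not the study's actual processing or measurements.

```python
import numpy as np

def triple_decompose(u, fs, cutoff_period):
    """Split a record u into time mean + slow fluctuation + turbulence.

    The slow component is taken as a moving average over one cutoff
    period; everything faster is attributed to turbulence. A simple
    sketch of the idea, not the study's exact filter.
    """
    u = np.asarray(u, float)
    u_mean = u.mean()
    win = max(1, int(round(cutoff_period * fs)))
    kernel = np.ones(win) / win
    slow = np.convolve(u - u_mean, kernel, mode="same")  # low-pass part
    turb = u - u_mean - slow                             # fast residual
    return u_mean, slow, turb

# Synthetic 50 Hz record: mean flow + 60 s oscillation + fast noise
fs = 50.0
t = np.arange(0.0, 600.0, 1.0 / fs)
u = (0.8
     + 0.2 * np.sin(2 * np.pi * t / 60.0)
     + 0.05 * np.random.default_rng(1).normal(size=t.size))
u_mean, slow, turb = triple_decompose(u, fs, cutoff_period=20.0)
print(u_mean, slow.std(), turb.std())
```

With a 20 s averaging window, the 60 s oscillation lands almost entirely in the slow component and the fast noise in the turbulent residual, which is the separation the triple decomposition is meant to achieve.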
Abstract:
In this paper, the recent results of the space project IMPERA are presented. The goal of IMPERA is the development of a multirobot planning and plan execution architecture with a focus on a lunar sample collection scenario in an unknown environment. We describe the implementation and verification of different modules that are integrated into a distributed system architecture. The modules include a mission planning approach for a multirobot system and modules for task and skill execution within a lunar use-case scenario. The skills needed for the test scenario include cooperative exploration and mapping strategies for an unknown environment, the localization and classification of sample containers using a novel approach of semantic perception, and the skill of transporting sample containers to a collection point using a mobile manipulation robot. Additionally, we present our approach to a reliable communication framework that can deal with communication loss during the mission. The modules are tested in several experiments covering planning and plan execution, communication, coordinated exploration, perception, and object transportation. The overall system integration is tested in a mission scenario experiment using three robots.
Cooperative choice and its framing effect under threshold uncertainty in a provision point mechanism
Abstract:
This paper explores how threshold uncertainty affects cooperative behaviors in the provision of public goods and the prevention of public bads. The following facts motivate our study. First, environmental (resource) problems are either framed as public bads prevention or public goods provision. Second, the occurrence of these problems is characterized by thresholds that are interchangeably represented as "nonconvexity," "bifurcation," "bi-stability," or "catastrophes." Third, the threshold location is mostly unknown. We employ a provision point mechanism with threshold uncertainty and analyze the responses of cooperative behaviors to uncertainty and to the framing for each type of social preferences categorized by a value orientation test. We find that aggregate framing effects are negligible, although the response to the frame is the opposite depending on the type of social preferences. "Cooperative" subjects become more cooperative in negative frames than in positive frames, whereas "individualistic" subjects are less cooperative in negative frames than in positive ones. This finding implies that the insignificance of aggregate framing effects arises from behavioral asymmetry. We also find that the percentage of cooperative choices non-monotonically varies with the degree of threshold uncertainty, irrespective of framing and value orientation. Specifically, the degree of cooperation is highest at intermediate levels of threshold uncertainty and decreases as the uncertainty becomes sufficiently large.
Abstract:
Consider N points in R-d and M local coordinate systems that are related through unknown rigid transforms. For each point, we are given (possibly noisy) measurements of its local coordinates in some of the coordinate systems. Alternatively, for each coordinate system, we observe the coordinates of a subset of the points. The problem of estimating the global coordinates of the N points (up to a rigid transform) from such measurements comes up in distributed approaches to molecular conformation and sensor network localization, and also in computer vision and graphics. The least-squares formulation of this problem, although nonconvex, has a well-known closed-form solution when M = 2 (based on the singular value decomposition (SVD)). However, no closed-form solution is known for M >= 3. In this paper, we demonstrate how the least-squares formulation can be relaxed into a convex program, namely, a semidefinite program (SDP). By setting up connections between the uniqueness of this SDP and results from rigidity theory, we prove conditions for exact and stable recovery for the SDP relaxation. In particular, we prove that the SDP relaxation can guarantee recovery under more adversarial conditions compared to earlier proposed spectral relaxations, and we derive error bounds for the registration error incurred by the SDP relaxation. We also present results of numerical experiments on simulated data to confirm the theoretical findings. We empirically demonstrate that (a) unlike the spectral relaxation, the relaxation gap is mostly zero for the SDP (i.e., we are able to solve the original nonconvex least-squares problem) up to a certain noise threshold, and (b) the SDP performs significantly better than spectral and manifold-optimization methods, particularly at large noise levels.
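The M = 2 closed form mentioned above is the classical SVD-based rigid registration (the Kabsch/Procrustes solution). A minimal sketch on synthetic noiseless data follows; the point count, dimension, and variable names are arbitrary choices for illustration.

```python
import numpy as np

def register_two(P, Q):
    """Closed-form least-squares rigid registration of two point sets.

    Finds the rotation R and translation t minimising ||R P + t - Q||_F
    via the SVD of the cross-covariance matrix. This is the M = 2 case;
    no closed form of this kind is known for M >= 3.
    """
    p_bar = P.mean(axis=1, keepdims=True)
    q_bar = Q.mean(axis=1, keepdims=True)
    H = (Q - q_bar) @ (P - p_bar).T          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Sign correction so R is a proper rotation (no reflection)
    D = np.diag([1.0] * (H.shape[0] - 1) + [np.linalg.det(U @ Vt)])
    R = U @ D @ Vt
    t = q_bar - R @ p_bar
    return R, t

# Check on a random rigid transform of 10 points in R^3
rng = np.random.default_rng(7)
P = rng.normal(size=(3, 10))
A, _ = np.linalg.qr(rng.normal(size=(3, 3)))
A *= np.linalg.det(A)                        # force det(A) = +1
b = rng.normal(size=(3, 1))
R, t = register_two(P, A @ P + b)
print(np.allclose(R, A), np.allclose(t, b))  # True True
```

In the noiseless case the SVD solution recovers the generating rotation and translation exactly, which is the baseline the SDP relaxation generalises to M >= 3 coordinate systems.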
Abstract:
Background: In the post-genomic era, where sequences are being determined at a rapid rate, we are highly reliant on computational methods for their tentative biochemical characterization. The Pfam database currently contains 3,786 families corresponding to "Domains of Unknown Function" (DUF) or "Uncharacterized Protein Family" (UPF), of which 3,087 families have no reported three-dimensional structure, constituting almost one-fourth of the known protein families in search of both structure and function. Results: We applied a "computational structural genomics" approach using five state-of-the-art remote similarity detection methods to detect relationships between uncharacterized DUFs and domain families of known structure. The association with a structural domain family could serve as a starting point in elucidating the function of a DUF. Among these five methods, searches in the SCOP-NrichD database have been applied for the first time. Predictions were classified into high, medium and low confidence based on the consensus of results from the various approaches and were also annotated with enzyme and Gene Ontology terms. 614 uncharacterized DUFs could be associated with a known structural domain, of which high-confidence predictions, involving at least four methods, were made for 54 families. These structure-function relationships for the 614 DUF families can be accessed on-line at http://proline.biochem.iisc.ernet.in/RHD_DUFS/. For potential enzymes in this set, we assessed their compatibility with the associated fold and performed detailed structural and functional annotation by examining alignments and the extent of conservation of functional residues. Detailed discussion is provided for interesting assignments for DUF3050, DUF1636, DUF1572, DUF2092 and DUF659. Conclusions: This study provides insights into the structure and potential function of nearly 20% of the DUFs.
The use of different computational approaches enables us to reliably recognize distant relationships, especially when they converge to a common assignment, because the methods are often complementary. We observe that while pointers to the structural domain can offer the right clues to the function of a protein, recognition of its precise functional role is still non-trivial, with many DUF domains conserving only some of the critical residues. It is not clear whether these are functional vestiges or instances involving alternate substrates and interacting partners. Reviewers: This article was reviewed by Drs Eugene Koonin, Frank Eisenhaber and Srikrishna Subramanian.
Abstract:
This thesis is a theoretical work on the space-time dynamic behavior of a nuclear reactor without feedback. Diffusion theory with G-energy groups is used.
In the first part the accuracy of the point-kinetics (lumped-parameter) model is examined. The fundamental approximation of this model is the splitting of the neutron density into a product of a known function of space and an unknown function of time; the properties of the system can then be averaged in space through the use of appropriate weighting functions, and as a result a set of ordinary differential equations is obtained for the description of the time behavior. It is clear that changes in the shape of the neutron-density distribution due to space-dependent perturbations are neglected. This results in an error in the eigenvalues, and it is for this error that bounds are derived. This is done by using the method of weighted residuals to reduce the original eigenvalue problem to that of a real asymmetric matrix. Gershgorin-type theorems are then used to find discs in the complex plane in which the eigenvalues are contained. The radii of the discs depend on the perturbation in a simple manner.
In the second part the effect of delayed neutrons on the eigenvalues of the group-diffusion operator is examined. The delayed neutrons cause a shifting of the prompt-neutron eigenvalues and the appearance of the delayed eigenvalues. Using a simple perturbation method, this shifting is calculated and the delayed eigenvalues are predicted with good accuracy.
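The Gershgorin-type bound used in the first part is simple to state computationally: every eigenvalue of a matrix lies in the union of discs centred at the diagonal entries, with radii given by the off-diagonal absolute row sums. A small numerical illustration follows; the matrix is an arbitrary example, not a reactor operator.

```python
import numpy as np

def gershgorin_discs(A):
    """Return (centres, radii) of the Gershgorin discs of a square matrix.

    Every eigenvalue of A lies in the union of the discs centred at the
    diagonal entries with radii equal to the off-diagonal row sums.
    """
    A = np.asarray(A, dtype=complex)
    centres = np.diag(A)
    radii = np.abs(A).sum(axis=1) - np.abs(centres)
    return centres, radii

A = np.array([[4.0, 0.5, 0.1],
              [0.2, 2.0, 0.3],
              [0.1, 0.1, 1.0]])
centres, radii = gershgorin_discs(A)

# Verify the theorem: each eigenvalue lies in at least one disc
for lam in np.linalg.eigvals(A):
    assert any(abs(lam - c) <= r for c, r in zip(centres, radii))
print(centres.real, radii)
```

Because the radii depend only on the off-diagonal entries, a small perturbation of the matrix moves the disc boundaries by a correspondingly small amount, which is the sense in which the eigenvalue bounds above "depend on the perturbation in a simple manner".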
Abstract:
We introduce a new learning problem: learning a graph by piecemeal search, in which the learner must return every so often to its starting point (for refueling, say). We present two linear-time piecemeal-search algorithms for learning city-block graphs: grid graphs with rectangular obstacles.
Abstract:
Under normal viewing conditions, humans find it easy to distinguish between objects made out of different materials such as plastic, metal, or paper. Untextured materials such as these have different surface reflectance properties, including lightness and gloss. With single isolated images and unknown illumination conditions, the task of estimating surface reflectance is highly underconstrained, because many combinations of reflection and illumination are consistent with a given image. In order to work out how humans estimate surface reflectance properties, we asked subjects to match the appearance of isolated spheres taken out of their original contexts. We found that subjects were able to perform the task accurately and reliably without contextual information to specify the illumination. The spheres were rendered under a variety of artificial illuminations, such as a single point light source, and a number of photographically-captured real-world illuminations from both indoor and outdoor scenes. Subjects performed more accurately for stimuli viewed under real-world patterns of illumination than under artificial illuminations, suggesting that subjects use stored assumptions about the regularities of real-world illuminations to solve the ill-posed problem.
Abstract:
The method of approximate approximations, introduced by Maz'ya [1], can also be used for the numerical solution of boundary integral equations. In this case, the matrix of the resulting algebraic system to compute an approximate source density depends only on the position of a finite number of boundary points and on the direction of the normal vector at these points (Boundary Point Method). We investigate this approach for the Stokes problem in the whole space and for the Stokes boundary value problem in a bounded convex domain G ⊂ R², where the latter consists of three steps: in the first step the unknown potential density is replaced by a linear combination of exponentially decreasing basis functions concentrated near the boundary points. In the second step, integration over the boundary ∂G is replaced by integration over the tangents at the boundary points, so that even analytical expressions for the potential approximations can be obtained. In the third step, finally, the linear algebraic system is solved to determine an approximate density function and the resulting solution of the Stokes boundary value problem. Even though it is not convergent, the method leads to an efficient approximation of the form O(h²) + ε, where ε can be chosen arbitrarily small.