832 results for Best Approximation


Relevance:

60.00%

Publisher:

Abstract:

Context. On 12 November 2014 the European mission Rosetta succeeded in delivering a lander, named Philae, to the surface of one of the smallest, lowest-gravity, and most primitive bodies of the solar system, the comet 67P/Churyumov-Gerasimenko (67P). Aims. The aim of this paper is to provide a comprehensive geomorphological and spectrophotometric analysis of Philae's landing site (Agilkia) to give an essential framework for the interpretation of its in situ measurements. Methods. OSIRIS images, coupled with gravitational slopes derived from the 3D shape model based on stereo-photogrammetry, were used to interpret the geomorphology of the site. We adopted the Hapke model, using previously derived parameters, to photometrically correct the images in the orange filter (649.2 nm). The best approximation to the Hapke model, given by the Akimov parameter-less function, was used to correct the reflectance for the effects of viewing and illumination conditions in the other filters. Spectral analyses on coregistered color cubes were used to retrieve spectrophotometric properties. Results. The landing site shows an average normal albedo of 6.7% in the orange filter, with variations of ~15%, and a globally featureless spectrum with an average red spectral slope of 15.2%/100 nm between 480.7 nm (blue filter) and 882.1 nm (near-IR filter). The spatial analysis shows a well-established correlation between the geomorphological units and the photometric characteristics of the surface. In particular, smooth deposits have the highest reflectance and a bluer spectrum than the outcropping material across the area. Conclusions. The featureless spectrum and the redness of the material are compatible with the results of other instruments, which have suggested an organic composition. The observed small spectral variegation could be due to grain-size effects. However, the combination of photometric and spectral variegation suggests that a compositional differentiation is more likely. This might be tentatively interpreted as the effect of the efficient dust-transport processes acting on 67P. High-activity regions might be the original sources of the smooth fine-grained materials that then covered Agilkia through airfall of residual material. Further observations by OSIRIS as the comet approaches the Sun will help in interpreting the processes that shape the landing site and the nucleus as a whole.
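
As a concrete illustration, the parameter-less Akimov disk function has a standard closed form in the photometric literature, and the red slope quoted above is a simple two-filter ratio. The sketch below uses illustrative numbers rather than values from the paper, and normalization conventions vary between studies:

```python
import numpy as np

def akimov_disk(alpha, beta, gamma):
    """Parameter-less Akimov disk function D(alpha, beta, gamma):
    alpha = phase angle, beta = photometric latitude,
    gamma = photometric longitude (all in radians)."""
    nu = alpha / (np.pi - alpha)
    return (np.cos(alpha / 2.0)
            * np.cos((np.pi / (np.pi - alpha)) * (gamma - alpha / 2.0))
            * np.cos(beta) ** nu
            / np.cos(gamma))

def red_slope(r_blue, r_nir, lam_blue=480.7, lam_nir=882.1):
    """Spectral slope in %/100 nm, normalized at the blue filter
    (normalization conventions differ between papers)."""
    return (r_nir - r_blue) / (r_blue * (lam_nir - lam_blue)) * 1.0e4

# Illustrative numbers only (not values from the paper):
r_corr = 0.067 / akimov_disk(*np.radians([10.0, 5.0, 8.0]))
slope = red_slope(r_blue=0.055, r_nir=0.089)   # ~15 %/100 nm
```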

Relevance:

60.00%

Publisher:

Abstract:

In this paper, modernized shipborne procedures are presented to collect and process above-water radiometry for remote sensing applications. A setup of five radiometers and a bidirectional camera system, which provides panoramic sea surface and sky images, is proposed for the collection of high-resolution radiometric quantities. Images from the camera system can be used to determine sky state and potential glint, whitecap, or foam contamination. A peak in the observed remote sensing reflectance (RRS) spectra between 750 and 780 nm was typically found in spectra with relatively high surface-reflected glint (SRG), which suggests this waveband could be a useful SRG indicator. Simplified steps for computing uncertainties in SRG-corrected RRS are proposed and discussed. The potential of utilizing "unweighted multimodel averaging," which is the average of four or more common SRG correction models, is examined to determine the best approximation of RRS. This best approximation of RRS provides an estimate of RRS based on various SRG correction models established using radiative transfer simulations and field investigations. Applying the average RRS provides a measure of the inherent uncertainties or biases that result from a user subjectively choosing any one SRG correction model. Comparisons between inherent and apparent optical property derived observations were used to assess the robustness of the SRG multimodel averaging approach. Correlations among the standard SRG models were computed to determine the degree of association or similarity between the SRG models. Results suggest that the choice of glint model strongly affects derived RRS values and can also influence the blue-to-green band ratios used for modeling biogeochemical parameters such as chlorophyll a. The objective here is to present a uniform and traceable methodology for determining shipborne RRS measurements and their associated errors due to glint correction, and to ensure the direct comparability of these measurements in future investigations. We encourage the ocean color community to publish radiometric field measurements with matching and complete metadata in open access repositories.
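
The generic above-water relation R_RS = (L_t − ρ·L_sky)/E_s underlies the SRG correction models mentioned; they differ mainly in how the sea-surface reflectance factor ρ is estimated. A minimal sketch of unweighted multimodel averaging under that assumption, with hypothetical radiances and illustrative ρ values standing in for four published models:

```python
import numpy as np

def rrs(lt, lsky, es, rho):
    """Remote sensing reflectance with surface-reflected glint removed:
    R_RS = (L_t - rho * L_sky) / E_s  (sr^-1)."""
    return (lt - rho * lsky) / es

# Hypothetical radiances (W m^-2 nm^-1 sr^-1) and downwelling irradiance.
lt, lsky, es = 0.012, 0.10, 1.4

# Illustrative rho values standing in for four published glint models.
rhos = np.array([0.028, 0.0256, 0.033, 0.021])

rrs_models = rrs(lt, lsky, es, rhos)
rrs_avg = rrs_models.mean()           # unweighted multimodel average
rrs_spread = rrs_models.std(ddof=1)   # inter-model spread ~ model-choice uncertainty
```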

Relevance:

60.00%

Publisher:

Abstract:

In this paper we present different error measurements with the aim of evaluating the quality of the approximations generated by the GNG3D method for mesh simplification. The first phase of this method consists of the execution of the GNG3D algorithm, described in the paper. The primary goal of this phase is to obtain a simplified set of vertices representing the best approximation of the original 3D object. In the reconstruction phase we use the information provided by the optimization algorithm to reconstruct the faces, thus obtaining the optimized mesh. The implementation of three error functions, named Eavg, Emax, and Esur, permits us to control the error of the simplified model, as shown in the examples studied.
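
The abstract does not reproduce the exact definitions of Eavg, Emax, and Esur; the sketch below uses nearest-vertex distances between the original and simplified vertex sets as a simple proxy for average and maximum error, just to make the idea concrete:

```python
import numpy as np

def approx_errors(orig_pts, simp_pts):
    """Proxy analogues of E_avg and E_max: distance from each original
    vertex to its nearest vertex in the simplified set. (The paper's
    E_avg, E_max, E_sur are surface-based; this is only a sketch.)"""
    d = np.linalg.norm(orig_pts[:, None, :] - simp_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return nearest.mean(), nearest.max()

rng = np.random.default_rng(0)
orig = rng.random((1000, 3))
simp = orig[::10]                      # crude stand-in for GNG3D output
e_avg, e_max = approx_errors(orig, simp)
```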

Relevance:

60.00%

Publisher:

Abstract:

In this paper we present a study of the computational cost of the GNG3D algorithm for mesh optimization. This algorithm implements a new neural-network-based method consisting of two distinct phases: an optimization phase and a reconstruction phase. The optimization phase applies an optimization algorithm based on the Growing Neural Gas model, an unsupervised incremental clustering algorithm. The primary goal of this phase is to obtain a simplified set of vertices representing the best approximation of the original 3D object. In the reconstruction phase we use the information provided by the optimization algorithm to reconstruct the faces, thus obtaining the optimized mesh. The computational cost of both phases is calculated and illustrated with several examples.
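
A minimal sketch of one Growing Neural Gas adaptation step, to make the optimization phase concrete. It follows the standard GNG update (move the winner and its topological neighbors toward the sample, age and prune edges) but omits error accumulation and unit insertion; all parameter values are illustrative, not the paper's:

```python
import numpy as np

def gng_step(units, edges, ages, x, eps_w=0.05, eps_n=0.006, max_age=50):
    """One simplified GNG adaptation step.
    units : (m, 3) unit positions; edges/ages : dicts keyed by
    frozenset({i, j}); x : one sampled surface point."""
    d = np.linalg.norm(units - x, axis=1)
    s1, s2 = map(int, np.argsort(d)[:2])     # winner and runner-up
    units[s1] += eps_w * (x - units[s1])     # move winner toward sample
    for e in list(edges):
        if s1 in e:
            ages[e] += 1
            (j,) = e - {s1}
            units[j] += eps_n * (x - units[j])   # move topological neighbors
            if ages[e] > max_age:
                del edges[e]; del ages[e]        # drop stale edges
    key = frozenset({s1, s2})
    edges[key] = True                        # (re)connect the two closest units
    ages[key] = 0
    return units, edges, ages

rng = np.random.default_rng(0)
units, edges, ages = rng.random((20, 3)), {}, {}
for x in rng.random((500, 3)):               # stream of surface samples
    units, edges, ages = gng_step(units, edges, ages, x)
```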

Relevance:

60.00%

Publisher:

Abstract:

Large amounts of information can be overwhelming and costly to process, especially when transmitting data over a network. A typical modern Geographical Information System (GIS) brings all types of data together based on the geographic component of the data and provides simple point-and-click query capabilities as well as complex analysis tools. Querying a Geographical Information System, however, can be prohibitively expensive due to the large amounts of data which may need to be processed. Since the use of GIS technology has grown dramatically in the past few years, there is now a greater need than ever to provide users with fast and inexpensive query capabilities, especially since an estimated 80% of data stored in corporate databases has a geographical component. However, not every application requires the same high-quality data for its processing. In this paper we address the issues of reducing the cost and response time of GIS queries by preaggregating data, trading away some accuracy and precision. We present computational issues in the generation of multi-level resolutions of spatial data and show that the problem of finding the best approximation for a given region and a real-valued function on this region, under a predictable error, is in general NP-complete.
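
A minimal sketch of the preaggregation idea: build coarser resolutions of a raster by block averaging, so that queries with a loose error tolerance can be answered from a coarse level instead of the full-resolution data. The pyramid construction and grid sizes are illustrative, not the paper's scheme:

```python
import numpy as np

def build_pyramid(grid, levels=3):
    """Pre-aggregate a raster into coarser levels by 2x2 block averaging,
    trading precision for cheaper queries (illustrative sketch)."""
    pyramid = [grid]
    for _ in range(levels):
        g = pyramid[-1]
        h, w = (g.shape[0] // 2) * 2, (g.shape[1] // 2) * 2
        coarser = g[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(coarser)
    return pyramid

# A query can then pick the coarsest level whose cell size still meets
# the caller's error tolerance instead of scanning full-resolution data.
values = np.random.default_rng(0).random((256, 256))
levels = build_pyramid(values)
```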

Relevance:

60.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 46B70, 41A10, 41A25, 41A27, 41A35, 41A36, 42A10.

Relevance:

30.00%

Publisher:

Abstract:

This paper suggests a simple method based on Chebyshev approximation at Chebyshev nodes to approximate partial differential equations. The methodology simply consists in determining the value function by using a set of nodes and basis functions. We provide two examples: pricing a European option and determining the best policy for shutting down a machine. The suggested method is flexible, easy to program, and efficient. It is also applicable in other fields, providing efficient solutions to complex systems of partial differential equations.
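
A minimal sketch of the core ingredients for the option example: sample a smooth value function (here a standard Black-Scholes call price, an assumed stand-in for the PDE solution) at Chebyshev nodes and interpolate it in the Chebyshev basis using numpy's polynomial module:

```python
import numpy as np
from math import erf, exp, log, sqrt

def bs_call(s, k=100.0, r=0.05, sigma=0.2, t=1.0):
    """Black-Scholes European call price (smooth test value function)."""
    d1 = (log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    N = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return s * N(d1) - k * exp(-r * t) * N(d2)

def cheb_nodes(n, a, b):
    """n Chebyshev nodes (roots of T_n) mapped to [a, b]."""
    k = np.arange(n)
    return 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * k + 1) * np.pi / (2 * n))

a, b, n = 50.0, 200.0, 24
x = cheb_nodes(n, a, b)
v = np.array([bs_call(s) for s in x])        # value function at the nodes

# Interpolant in the Chebyshev basis, evaluable anywhere in [a, b].
cheb = np.polynomial.Chebyshev.fit(x, v, deg=n - 1, domain=[a, b])
err = abs(cheb(120.0) - bs_call(120.0))      # small for smooth functions
```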

Relevance:

30.00%

Publisher:

Abstract:

In this work we present the formulas for the calculation of exact three-center electron sharing indices (3c-ESI) and introduce two new approximate expressions for correlated wave functions. The 3c-ESI uses the third-order density, the diagonal of the third-order reduced density matrix, but the approximations suggested in this work only involve natural orbitals and occupancies. In addition, the first calculations of the 3c-ESI using Valdemoro's, Nakatsuji's and Mazziotti's approximations for the third-order reduced density matrix are also presented for comparison. Our results on a test set of molecules, including 32 3c-ESI values, prove that the new approximation based on the cubic root of natural occupancies performs the best, yielding absolute errors below 0.07 and an average absolute error of 0.015. Furthermore, this approximation seems to be rather insensitive to the amount of electron correlation present in the system. This newly developed methodology provides a computationally inexpensive method to calculate the 3c-ESI from correlated wave functions and opens new avenues to approximate high-order reduced density matrices in other contexts, such as the contracted Schrödinger equation and the anti-Hermitian contracted Schrödinger equation.
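
The abstract does not spell out the ansatz; the sketch below shows one plausible schematic form of a cubic-root-of-occupancy approximation, with the paper's prefactors and normalization conventions deliberately omitted. It is an assumption-laden illustration of the input shapes, not the paper's exact expression:

```python
import numpy as np

def esi_3c_cuberoot(occ, SA, SB, SC):
    """Schematic cubic-root-of-occupancy ansatz for the 3c-ESI:
        delta(A,B,C) ~ sum_ijk (n_i n_j n_k)**(1/3)
                       * S_ij(A) * S_jk(B) * S_ki(C),
    where occ holds the natural occupancies n_i and S(X) is the
    atomic-overlap matrix of the natural orbitals over domain X.
    Prefactors/normalization from the paper are omitted here."""
    D = np.diag(np.cbrt(occ))
    return float(np.trace(D @ SA @ D @ SB @ D @ SC))

def sym(a):                              # overlap matrices are symmetric
    return (a + a.T) / 2.0

rng = np.random.default_rng(0)
occ = rng.uniform(0.0, 2.0, 6)           # random stand-in occupancies
SA, SB, SC = (sym(rng.random((6, 6))) for _ in range(3))
print(esi_3c_cuberoot(occ, SA, SB, SC))
```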

Relevance:

30.00%

Publisher:

Abstract:

Given a function $f:\mathbb{N}^k \to \mathbb{R}$, bounded above or below and given by a mathematical expression, the problem of finding the extremal points of $f$ on every finite set $S \subset \mathbb{N}^k$ is well defined from the classical point of view. From the point of view of computability theory, however, one must avoid the pathological cases in which this problem has infinite Kolmogorov complexity. The main restriction consists in defining the order, because comparison between real numbers is not decidable. We solve this problem by means of a structure containing two algorithms: an algorithm from recursive real analysis to evaluate the cost function in infinite-precision arithmetic, and another algorithm that transforms each value of this function into a vector of a space which, in general, is infinite-dimensional. We develop three particular cases of this structure, one of which corresponds to Rauzy's approximation method. Finally, we compare the best simultaneous Diophantine approximations obtained by Rauzy's method (under the interpretation given here) with those of another method, called tetrahedral, which we introduce from the vector space generated by the logarithms of the prime numbers.
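
A brute-force baseline makes the notion of best simultaneous Diophantine approximation concrete: for each denominator q up to a bound, record the q that sets a new record for the worst distance of q·x_i to the nearest integer. The example uses logarithms of primes, echoing the tetrahedral construction; the bound is arbitrary:

```python
from math import log

def best_simultaneous_approx(xs, qmax):
    """Brute-force best simultaneous Diophantine approximations of the
    reals `xs`: keep each q = 1..qmax that sets a new record for
    max_i |q*x_i - round(q*x_i)| (a baseline to compare methods against)."""
    best, records = 1.0, []
    for q in range(1, qmax + 1):
        err = max(abs(q * x - round(q * x)) for x in xs)
        if err < best:
            best = err
            records.append((q, [round(q * x) for x in xs], err))
    return records

# Example with logarithms of primes.
print(best_simultaneous_approx([log(2), log(3)], 1000))
```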

Relevance:

30.00%

Publisher:

Abstract:

The computation of a piecewise smooth function that approximates a finite set of data points may be decomposed into two decoupled tasks: first, the computation of the locally smooth models, and hence the segmentation of the data into classes that consist of the sets of points best approximated by each model; and second, the computation of the normalized discriminant functions for each induced class. The approximating function may then be computed as the optimal estimator with respect to the resulting measure field. We give an efficient procedure for effecting both computations, and for determining the optimal number of components.
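
A much-simplified, Lloyd-style stand-in for this decomposition (hard assignments instead of the paper's normalized discriminant functions and measure field), alternating the two tasks with k local linear models:

```python
import numpy as np

def fit_piecewise_lines(x, y, k=2, iters=20, seed=0):
    """Alternate between (1) refitting k local linear models and
    (2) reassigning each point to the model that approximates it best --
    a simplified proxy for the segmentation/estimation decomposition."""
    labels = np.random.default_rng(seed).integers(0, k, size=x.size)
    models = np.zeros((k, 2))
    for _ in range(iters):
        for j in range(k):                       # task 1: refit local models
            m = labels == j
            if m.sum() >= 2:
                models[j] = np.polyfit(x[m], y[m], 1)
        resid = np.abs(np.stack([np.polyval(p, x) for p in models], axis=1)
                       - y[:, None])
        labels = resid.argmin(axis=1)            # task 2: resegment the data
    return models, labels

x = np.linspace(0.0, 2.0, 200)
y = np.where(x < 1.0, 2.0 * x, 3.0 - x)          # piecewise-linear test data
models, labels = fit_piecewise_lines(x, y)
```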

Relevance:

30.00%

Publisher:

Abstract:

In the English literature, facial approximation methods have been commonly classified into three types: Russian, American, or Combination. These categorizations are based on the protocols used, for example, whether methods use average soft-tissue depths (American methods) or require face muscle construction (Russian methods). However, literature searches outside the usual realm of English publications reveal key papers that demonstrate that the Russian category above has been founded on distorted views. In reality, Russian methods are based on limited face muscle construction, with heavy reliance on modified average soft-tissue depths. A closer inspection of the American method also reveals inconsistencies with the recognized classification scheme. This investigation thus demonstrates that all major methods of facial approximation depend on both face anatomy and average soft-tissue depths, rendering common method classification schemes redundant. The best way forward appears to be for practitioners to describe the methods they use (including the weight each one gives to average soft-tissue depths and deep face tissue construction) without placing them in any categorical classificatory group or giving them an ambiguous name. The state of this situation may need to be reviewed in the future in light of new research results and paradigms.

Relevance:

30.00%

Publisher:

Abstract:

Congenital Nystagmus (CN) is an ocular-motor disorder characterised by involuntary, conjugated ocular oscillations, and its pathogenesis is still unknown. The pathology is defined as "congenital" from its time of onset, which may be at birth or in the first months of life. Visual acuity in CN subjects is often diminished by the continuous nystagmus oscillations, mainly on the horizontal plane, which disturb image fixation on the retina. However, during the short periods in which eye velocity slows down while the target image is placed onto the fovea (called foveation intervals), the image of a given target can still be stable, allowing a subject to reach a higher visual acuity. In CN subjects, visual acuity is usually assessed both with typical measurement techniques (e.g. the Landolt C test) and with eye-movement recordings in different gaze positions. The offline study of eye-movement recordings allows physicians to analyse the main features of the nystagmus, such as waveform shape, amplitude, and frequency, and to compute estimated visual acuity predictors. These analytical functions estimate the best corrected visual acuity using foveation time and foveation position variability; hence, a reliable estimation of these two parameters is a fundamental factor in assessing visual acuity. This work aims to enhance foveation time estimation in CN eye-movement recordings by computing a second-order approximation of the slow-phase components of the nystagmus oscillations. Nineteen infrared oculographic eye-movement recordings from 10 CN subjects were acquired, and the visual acuity assessed with an acuity predictor was compared to that measured in primary position. Results suggest that visual acuity measurements based on foveation time estimation obtained from interpolated data are closer to the values obtained during Landolt C tests. © 2010 IEEE.
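
A minimal sketch of the second-order slow-phase idea: fit a quadratic to a slow-phase segment of the position trace and count the time the fitted trajectory stays slow and near the target. The velocity and position thresholds are illustrative assumptions, not the clinical criteria used in the paper:

```python
import numpy as np

def foveation_time(t, pos, v_thresh=4.0, p_thresh=0.5):
    """Fit a second-order polynomial to a slow-phase segment of the eye
    position trace (degrees) and count the time the fitted eye stays
    slow and near the target. Thresholds (deg/s, deg) are illustrative."""
    coeffs = np.polyfit(t, pos, 2)               # 2nd-order slow-phase model
    vel = np.polyval(np.polyder(coeffs), t)      # analytic velocity of the fit
    on_target = ((np.abs(vel) < v_thresh)
                 & (np.abs(np.polyval(coeffs, t)) < p_thresh))
    return on_target.sum() * float(np.median(np.diff(t)))

# Hypothetical 100 Hz slow-phase segment drifting off the target:
t = np.arange(0.0, 0.30, 0.01)
pos = 0.1 + 1.5 * t + 8.0 * t**2                 # degrees
print(foveation_time(t, pos))
```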

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents approximation algorithms for some NP-Hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: an approximation algorithm for an NP-Hard problem is a polynomial-time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible.

The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u,v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex-connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below.

We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k) approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes. We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications.

Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink.

We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. Given a connectivity requirement R_uv for each pair of vertices u,v, the goal is to find a low-cost network which, for each uv, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results.

In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + epsilon)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and for other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.

Relevance:

20.00%

Publisher:

Abstract:

The Ophira Mini Sling System involves anchoring a midurethral, low-tension tape to the obturator internus muscles bilaterally at the level of the tendinous arc. Success rates in different subsets of patients are still to be defined. This work aims to identify which factors influence the 2-year outcomes of this treatment. Analysis was based on data from a multicenter study. Endpoints for analysis included objective measurements, the 1-h pad-weight test (PWT) and the cough stress test (CST), and questionnaires, the International Consultation on Incontinence Questionnaire-Short Form (ICIQ-SF) and the Urinary Distress Inventory (UDI-6). A logistic regression analysis evaluated possible risk factors for failure. In all, 124 female patients with stress urinary incontinence (SUI) underwent treatment with the Ophira procedure. All patients completed 1 year of follow-up, and 95 complied with the 2-year evaluation. Longitudinal analysis showed no significant differences between results at 1 and 2 years. The 2-year overall objective results were 81 (85.3%) patients dry, six (6.3%) improved, and eight (8.4%) incontinent. A multivariate analysis revealed that previous anti-incontinence surgery was the only factor that significantly influenced surgical outcomes. Two years after treatment, women with previous failed surgeries had an odds ratio (OR) for treatment failure (based on PWT) of 4.0 [95% confidence interval (CI) 1.02-15.57]. The Ophira procedure is an effective option for SUI treatment, with durable good results. Previous surgery was identified as the only significant risk factor, though previously operated patients still showed an acceptable success rate.

Relevance:

20.00%

Publisher:

Abstract:

Consider a random medium consisting of N points randomly distributed so that there is no correlation among the distances separating them. This is the random link model, which is the high-dimensionality limit (mean-field approximation) of the Euclidean random point structure. In the random link model, at discrete time steps, a walker moves to the nearest point that has not been visited in the last μ steps (the memory), producing a deterministic, partially self-avoiding walk (the tourist walk). We have analytically obtained the distribution of the number n of points explored by the walker with memory μ = 2, as well as the joint distribution of transient and period. This result enables us to explain the abrupt change in exploratory behavior between the cases μ = 1 (memoryless walker, driven by extreme-value statistics) and μ = 2 (walker with memory, driven by combinatorial statistics). In the μ = 1 case, the mean number of newly visited points in the thermodynamic limit (N ≫ 1) is just ⟨n⟩ = e = 2.72..., while in the μ = 2 case, the mean number ⟨n⟩ of visited points grows proportionally to N^(1/2). This result also allows us to establish an equivalence between the random link model with μ = 2 and the random map (uncorrelated back-and-forth distances) with μ = 0, and to explain the abrupt change between the probabilities for a null transient time and for subsequent ones.
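
A Monte Carlo sketch of the tourist walk on the random link model under the rules stated above (symmetric i.i.d. distances, a tabu window of the last μ points). The system size, number of runs, and counting convention for n are illustrative choices:

```python
import numpy as np

def tourist_walk(N, mu, rng):
    """Deterministic tourist walk on the random link model: i.i.d.
    symmetric distances, and at each step the walker jumps to the nearest
    point not visited in the last mu steps. Returns the number of distinct
    points explored before the trajectory enters its periodic attractor."""
    d = rng.random((N, N))
    d = (d + d.T) / 2.0                  # symmetric, otherwise uncorrelated
    np.fill_diagonal(d, np.inf)
    path, seen = [0], set()
    while True:
        state = tuple(path[-mu:])        # memory window fixes the dynamics
        if state in seen:                # repeated state -> cycling begins
            return len(set(path))
        seen.add(state)
        tabu = set(state)
        cur = path[-1]
        nxt = next(int(j) for j in np.argsort(d[cur]) if int(j) not in tabu)
        path.append(nxt)

rng = np.random.default_rng(0)
# Mean explored points for mu = 1; close to e, modulo counting convention.
print(np.mean([tourist_walk(200, mu=1, rng=rng) for _ in range(100)]))
```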