964 results for Point Data


Relevance:

100.00%

Publisher:

Abstract:

This paper describes an algorithm for constructing the solid model (boundary representation) from point data measured from the faces of the object. The point data is assumed to be clustered for each face. This algorithm does not require any computer model of the part to exist and does not require any topological information about the part to be input by the user. The property that a convex solid can be constructed uniquely from geometric input alone is utilized in the current work. Any object can be represented as a combination of convex solids. The proposed algorithm attempts to construct convex polyhedra from the given input. The polyhedra so obtained are then checked against the input data for containment, and those polyhedra that satisfy this check are combined (using the boolean union operation) to realise the solid model. Results of implementation are presented.
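The containment check at the heart of such an approach can be sketched simply: a convex polyhedron described by its face planes contains a point exactly when every half-space inequality is satisfied. This is only an illustrative sketch of that test, not the paper's implementation; the representation of faces as (normal, offset) pairs is an assumption.

```python
# Illustrative sketch: a convex polyhedron given by face planes
# (outward normal n and offset d, with n·x <= d meaning "inside")
# contains a point iff the point satisfies every half-space inequality.

def inside_convex(point, faces, tol=1e-9):
    """faces: list of (normal, offset) pairs; True if point lies inside."""
    return all(
        sum(n_i * p_i for n_i, p_i in zip(normal, point)) <= offset + tol
        for normal, offset in faces
    )

# Example: the unit cube [0,1]^3 expressed as six half-spaces.
cube = [
    ((1, 0, 0), 1), ((-1, 0, 0), 0),
    ((0, 1, 0), 1), ((0, -1, 0), 0),
    ((0, 0, 1), 1), ((0, 0, -1), 0),
]
```

Running the check against measured points for each candidate polyhedron, and keeping only polyhedra that pass, mirrors the containment filtering described in the abstract.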

Relevance:

100.00%

Publisher:

Abstract:

Flash points (T(FP)) of hydrocarbons are calculated from their flash point numbers, N(FP), with the relationship T(FP) (K) = 23.369 N(FP)^(2/3) + 20.010 N(FP)^(1/3) + 31.901. In turn, the N(FP) values can be predicted from experimental boiling point numbers (Y(BP)) and molecular structure with the equation N(FP) = 0.987 Y(BP) + 0.176 D + 0.687 T + 0.712 B - 0.176, where D is the number of olefinic double bonds in the structure, T is the number of triple bonds, and B is the number of aromatic rings. For a data set consisting of 300 diverse hydrocarbons, the average absolute deviation between the literature and predicted flash points was 2.9 K.
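The two relationships quoted in the abstract translate directly into code; this is a plain transcription of the stated equations, nothing more.

```python
# The two correlations stated in the abstract.

def flash_point_K(n_fp):
    """T(FP) in kelvin from the flash point number N(FP)."""
    return 23.369 * n_fp ** (2 / 3) + 20.010 * n_fp ** (1 / 3) + 31.901

def flash_point_number(y_bp, double_bonds=0, triple_bonds=0, aromatic_rings=0):
    """N(FP) from the boiling point number Y(BP) and structural counts."""
    return (0.987 * y_bp + 0.176 * double_bonds
            + 0.687 * triple_bonds + 0.712 * aromatic_rings - 0.176)
```

For example, N(FP) = 1 gives T(FP) = 23.369 + 20.010 + 31.901 = 75.280 K.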

Relevance:

80.00%

Publisher:

Abstract:

Constructing a 3D surface model from sparse-point data is a nontrivial task. Here, we report an accurate and robust approach for reconstructing a surface model of the proximal femur from sparse-point data and a dense-point distribution model (DPDM). The problem is formulated as a three-stage optimal estimation process. The first stage, affine registration, iteratively estimates a scale and a rigid transformation between the mean surface model of the DPDM and the sparse input points. The estimation results of the first stage are used to establish point correspondences for the second stage, statistical instantiation, which stably instantiates a surface model from the DPDM using a statistical approach. This surface model is then fed to the third stage, kernel-based deformation, which further refines the surface model. Handling outliers is achieved by consistently employing the least trimmed squares (LTS) approach with a roughly estimated outlier rate in all three stages. If an optimal value of the outlier rate is preferred, we propose a hypothesis testing procedure to automatically estimate it. We present here our validations using four experiments: (1) a leave-one-out experiment; (2) an experiment on evaluating the present approach for handling pathology; (3) an experiment on evaluating the present approach for handling outliers; and (4) an experiment on reconstructing surface models of seven dry cadaver femurs using clinically relevant data without noise and with noise added. Our validation results demonstrate the robust performance of the present approach in handling outliers, pathology, and noise. An average 95-percentile error of 1.7-2.3 mm was found when the present approach was used to reconstruct surface models of the cadaver femurs from sparse-point data with noise added.
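The least trimmed squares idea used across all three stages can be illustrated in one dimension: repeatedly keep the h points with the smallest squared residuals and re-fit on that subset. This sketch shows only the core trimming loop under a given outlier rate, not the paper's registration or instantiation machinery.

```python
# Illustrative LTS for a 1-D location estimate: iterate between selecting
# the h smallest-squared-residual points and re-fitting on that subset.
# (The paper applies LTS inside its three estimation stages; this sketch
# shows only the trimming principle.)

def lts_location(values, outlier_rate, n_iter=20):
    h = max(1, int(round(len(values) * (1 - outlier_rate))))  # points kept
    estimate = sum(values) / len(values)                      # initial fit
    for _ in range(n_iter):
        kept = sorted(values, key=lambda v: (v - estimate) ** 2)[:h]
        estimate = sum(kept) / h                              # re-fit on inliers
    return estimate
```

With data clustered near 1.0 plus a gross outlier at 10.0 and an outlier rate of 0.2, the estimate settles near 1.0, whereas a plain mean would be pulled to 2.5.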

Relevance:

70.00%

Publisher:

Abstract:

Harmful algal blooms (HABs) are a significant and potentially expanding problem around the world. Resource management and public health protection require sufficient information to reduce the impacts of HABs through response strategies, warnings, and advisories. To be effective, these programs are best served by integrating improved detection methods with both evolving monitoring systems and new communications capabilities. Data sets are typically collected from a variety of sources; these can be considered as several types: point data, such as water samples; transects, such as from shipboard continuous sampling; and synoptic data, such as from satellite imagery. Generation of a field of the HAB distribution requires all of these sampling approaches. This means that the data sets need to be interpreted and analyzed together to create the field, or distribution, of the HAB. The HAB field is also a necessary input into models that forecast blooms. Several systems have developed strategies that demonstrate these approaches, ranging from data sets collected at key sites, such as swimming beaches, to automated collection systems, to integration of interpreted satellite data. Improved data collection, particularly in speed and cost, will be one of the advances of the next few years. Methods to improve creation of the HAB field from the variety of data types will be necessary for routine nowcasting and forecasting of HABs.
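One common way to turn scattered point samples (e.g., cell counts from water samples) into a continuous field is inverse-distance weighting. This is a generic illustration of building a field from point data, not the method of any specific HAB system mentioned in the abstract.

```python
# Inverse-distance weighting: interpolate a value at (x, y) from scattered
# (x, y, value) samples, weighting each sample by 1/distance^power.
import math

def idw(x, y, samples, power=2):
    """samples: list of (x, y, value); returns interpolated value at (x, y)."""
    num = den = 0.0
    for sx, sy, v in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0:
            return v              # query point coincides with a sample
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den
```

Transect and synoptic data would, in practice, be merged into such a field with more sophisticated schemes, but the principle of weighting nearby observations more heavily is the same.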

Relevance:

70.00%

Publisher:

Abstract:

We report a novel method for calculating flash points of acyclic alkanes from flash point numbers, N(FP), which can be calculated from experimental or calculated boiling point numbers (Y(BP)) with the equation N(FP) = 1.020 Y(BP) - 1.083. Flash points (FP) are then determined from the relationship FP (K) = 23.369 N(FP)^(2/3) + 20.010 N(FP)^(1/3) + 31.901. For a data set of 102 linear and branched alkanes, the correlation of literature and predicted flash points has R^2 = 0.985 and an average absolute deviation of 3.38 K. N(FP) values can also be estimated directly from molecular structure to produce an even closer correspondence of literature and predicted FP values. Furthermore, N(FP) values provide a new method to evaluate the reliability of literature flash point data.
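The alkane-specific relationship, chained with the general T(FP) correlation quoted above, also gives a simple way to screen a literature value against the prediction, which is the reliability check the abstract mentions. This is a plain transcription of the stated equations; the screening function is an illustrative assumption.

```python
# The alkane correlation and the general flash point relationship,
# as stated in the abstract.

def n_fp_from_y_bp(y_bp):
    return 1.020 * y_bp - 1.083

def flash_point_K(n_fp):
    return 23.369 * n_fp ** (2 / 3) + 20.010 * n_fp ** (1 / 3) + 31.901

def deviation_K(literature_fp_K, y_bp):
    """Absolute deviation of a literature flash point from the prediction
    (illustrative reliability screen, not the authors' exact procedure)."""
    return abs(literature_fp_K - flash_point_K(n_fp_from_y_bp(y_bp)))
```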

Relevance:

70.00%

Publisher:

Abstract:

In recent years, the use of Reverse Engineering systems has attracted considerable interest for a wide range of applications. Consequently, many research activities focus on the accuracy and precision of the acquired data and on improvements to the post-processing phase. In this context, this PhD thesis defines two novel methods for data post-processing and for data fusion between physical and geometrical information. In particular, a technique has been defined for characterizing the error in 3D point coordinates acquired by an optical triangulation laser scanner, with the aim of identifying adequate correction arrays to apply under different acquisition parameters and operative conditions. Systematic error in the acquired data is thus compensated, in order to increase accuracy. Moreover, the definition of a 3D thermogram is examined: the object's geometrical information and its thermal properties, coming from a thermographic inspection, are combined in order to obtain a temperature value for each recognizable point. Data acquired by the optical triangulation laser scanner are also used to normalize the temperature values and make the thermal data independent of the thermal camera's point of view.
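The correction-array idea can be sketched as a lookup table of systematic coordinate offsets indexed by acquisition parameters, subtracted from each acquired point. The parameter names and offset values here are hypothetical; the thesis's actual arrays and indexing scheme are not described in the abstract.

```python
# Hedged sketch of a correction array: a table of systematic offsets
# (dx, dy, dz), keyed by acquisition parameters (names are hypothetical),
# subtracted from each acquired 3D point.

def apply_correction(points, correction_table, params):
    """points: list of (x, y, z); correction_table maps params -> (dx, dy, dz)."""
    dx, dy, dz = correction_table.get(params, (0.0, 0.0, 0.0))
    return [(x - dx, y - dy, z - dz) for x, y, z in points]

# Example table: offsets calibrated for one (scan distance, laser power)
# setting; these numbers are purely illustrative.
table = {("400mm", "high"): (0.02, -0.01, 0.05)}
```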

Relevance:

70.00%

Publisher:

Abstract:

Glacier thickness is an important factor in the course of glacier retreat in a warming climate. This dataset presents the results (point data) of GPR surveys on 66 Austrian mountain glaciers carried out between 1995 and 2014. The glacier areas range from 0.001 to 18.4 km**2, and their ice thickness has been surveyed with an average density of 36 points/km**2. The glacier areas and surface elevations refer to the second Austrian glacier inventory (mapped between 1996 and 2002). According to the glacier state recorded in the second glacier inventory, the 64 glaciers cover an area of 223.3±3.6 km**2. Maps of glacier thickness have been calculated by Fischer and Kuhn (2013) with a mean thickness of 50±3 m and contain a glacier volume of 11.9±1.1 km**3. The mean maximum ice thickness is 119±5 m. The ice thickness measurements have been carried out with the transmitter of Narod and Clarke (1994) combined with resistively loaded dipole antennas (Wu and King, 1965; Rose and Vickers, 1974) at centre frequencies of 6.5 MHz (30 m antenna length) and 4.0 MHz (50 m antenna length). The signal was recorded trace by trace with an oscilloscope. The signal velocity in ice is assumed to be 168 m/µs, as used by Haeberli et al. (1982), Bauder (2001), and Narod and Clarke (1994); the signal velocity in air is assumed to be 300 m/µs. Details on the method can be found in Fischer and Kuhn (2013), as well as Span et al. (2005) and Fischer et al. (2007).
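Converting a GPR two-way travel time into an ice thickness is a single step: the pulse travels down to the bed and back, so thickness = v·t/2 with the ice velocity of 168 m/µs stated above. A minimal sketch of that conversion:

```python
# Two-way travel time to ice thickness: the radar pulse travels down and
# back, so thickness = v * t / 2, using the velocities stated in the text.

V_ICE = 168.0  # m/µs, signal velocity in ice
V_AIR = 300.0  # m/µs, signal velocity in air (used for air-wave corrections)

def ice_thickness_m(two_way_time_us):
    """Ice thickness in metres from a two-way travel time in microseconds."""
    return V_ICE * two_way_time_us / 2.0
```

For example, a two-way time of 1.0 µs corresponds to 84 m of ice.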

Relevance:

70.00%

Publisher:

Abstract:

Measures have been developed to understand tendencies in the distribution of economic activity. The merits of these measures lie in the convenience of data collection and processing. In this interim report, investigating the capacity of such measures to determine the geographical spread of economic activities, we summarize their merits and limitations, and make clear that caution is needed in their usage. As a first trial in assessing areal data, this project focuses on administrative areas, not on point data or input-output data. Firm-level data is not within the scope of this article. The rest of this article is organized as follows. In Section 2, we touch on the limitations and problems associated with the measures and areal data. Specific measures are introduced in Section 3 and applied in Section 4. The conclusion summarizes the findings and discusses future work.
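The abstract does not name its measures, so purely as a generic illustration of a concentration measure over administrative areas: the Herfindahl index, the sum of squared shares of total activity. This is an assumption about the kind of measure meant, not a claim about Section 3.

```python
# Generic illustration (not necessarily one of the report's measures):
# the Herfindahl index of concentration, computed from each area's share
# of total economic activity. 1/n for perfectly even spread, 1.0 for
# complete concentration in one area.

def herfindahl(activity_by_area):
    total = sum(activity_by_area)
    return sum((a / total) ** 2 for a in activity_by_area)
```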

Relevance:

70.00%

Publisher:

Abstract:

Data on the occurrence of species are widely used to inform the design of reserve networks. These data contain commission errors (when a species is mistakenly thought to be present) and omission errors (when a species is mistakenly thought to be absent), and the rates of the two types of error are inversely related. Point locality data can minimize commission errors, but those obtained from museum collections are generally sparse, suffer from substantial spatial bias and contain large omission errors. Geographic ranges generate large commission errors because they assume homogeneous species distributions. Predicted distribution data make explicit inferences on species occurrence and their commission and omission errors depend on model structure, on the omission of variables that determine species distribution and on data resolution. Omission errors lead to identifying networks of areas for conservation action that are smaller than required and centred on known species occurrences, thus affecting the comprehensiveness, representativeness and efficiency of selected areas. Commission errors lead to selecting areas not relevant to conservation, thus affecting the representativeness and adequacy of reserve networks. Conservation plans should include an estimation of commission and omission errors in underlying species data and explicitly use this information to influence conservation planning outcomes.
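The two error rates can be made concrete for a single species over a set of sites: commission is the fraction of predicted-present sites where the species is truly absent, omission the fraction of truly present sites the data miss. A minimal sketch, assuming sites are identified by simple IDs:

```python
# Commission and omission rates for one species over a set of sites,
# following the definitions in the text.

def error_rates(true_present, predicted_present):
    """Both arguments are sets of site identifiers.
    Returns (commission rate, omission rate)."""
    commission = len(predicted_present - true_present) / len(predicted_present)
    omission = len(true_present - predicted_present) / len(true_present)
    return commission, omission
```

Shrinking the predicted set lowers commission but raises omission, which is the inverse relationship the abstract describes.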

Relevance:

70.00%

Publisher:

Abstract:

Scientific applications rely heavily on floating point data types. Floating point operations are complex and require complicated hardware that is both area and power intensive. The emergence of massively parallel architectures like Rigel creates new challenges and poses new questions with respect to floating point support. The massively parallel aspect of Rigel places great emphasis on area efficient, low power designs. At the same time, Rigel is a general purpose accelerator and must provide high performance for a wide class of applications. This thesis presents an analysis of various floating point unit (FPU) components with respect to Rigel, and attempts to present a candidate design of an FPU that balances performance, area, and power and is suitable for massively parallel architectures like Rigel.

Relevance:

60.00%

Publisher:

Abstract:

The Streaming SIMD Extensions (SSE) are a special feature available in the Intel Pentium III and P4 classes of microprocessors. As the name implies, SSE enables the execution of SIMD (Single Instruction, Multiple Data) operations upon 32-bit floating-point data; therefore, the performance of floating-point algorithms can be improved. In electrified railway system simulation, the computation involves solving a huge set of simultaneous linear equations, which represent the electrical characteristics of the railway network at a particular time-step, and a fast solution of the equations is desirable in order to simulate the system in real-time. In this paper, we present how SSE is applied to the railway network simulation.
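The core step the paper accelerates, solving a linear system each time-step, can be sketched as plain Gaussian elimination; the inner row update is exactly the kind of loop SSE vectorizes four 32-bit floats at a time. This is a generic sketch of the computation, not the paper's implementation.

```python
# Gaussian elimination with partial pivoting; the inner row-update loop
# is the operation an SSE implementation would vectorize.

def solve(A, b):
    """Solve A x = b for a small dense system."""
    n = len(A)
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]      # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # partial pivot
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]    # row operation SIMD accelerates
    x = [0.0] * n
    for i in range(n - 1, -1, -1):        # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```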

Relevance:

60.00%

Publisher:

Abstract:

Discretization of a geographical region is quite common in spatial analysis. There have been few studies into the impact of different geographical scales on the outcome of spatial models for different spatial patterns. This study aims to investigate the impact of spatial scales and spatial smoothing on the outcomes of modelling spatial point-based data. Given a spatial point-based dataset (such as occurrence of a disease), we study the geographical variation of residual disease risk using regular grid cells. The individual disease risk is modelled using a logistic model with the inclusion of spatially unstructured and/or spatially structured random effects. Three spatial smoothness priors for the spatially structured component are employed in modelling, namely an intrinsic Gaussian Markov random field, a second-order random walk on a lattice, and a Gaussian field with a Matérn correlation function. We investigate how changes in grid cell size affect model outcomes under different spatial structures and different smoothness priors for the spatial component. A realistic example (the Humberside data) is analyzed and a simulation study is described. Bayesian computation is carried out using an integrated nested Laplace approximation. The results suggest that the performance and predictive capacity of the spatial models improve as the grid cell size decreases for certain spatial structures. It also appears that different spatial smoothness priors should be applied for different patterns of point data.
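The discretization step itself, aggregating point observations into regular grid cells of a chosen size, is easy to sketch; varying the cell size is then what drives the scale comparison the study performs. This is an illustrative sketch, not the study's code.

```python
# Aggregating (x, y) point observations into regular grid cells.
# Varying cell_size changes the spatial scale of the analysis.
from collections import Counter

def bin_points(points, cell_size):
    """points: list of (x, y); returns a Counter of (col, row) -> count."""
    return Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in points
    )
```

Halving `cell_size` quadruples the number of cells, so counts per cell drop and the spatial random effects have to do more of the smoothing, which is the trade-off the study examines.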

Relevance:

60.00%

Publisher:

Abstract:

The European wild rabbit has been considered Australia's worst vertebrate pest, and yet little effort appears to have gone into producing maps of rabbit distribution and density. Mapping the distribution and density of pests is an important step in effective management. A map is essential for estimating the extent of damage caused and for efficiently planning and monitoring the success of pest control operations. This paper describes the use of soil type and point data to prepare a map showing the distribution and density of rabbits in Australia. The potential for the method to be used for mapping other vertebrate pests is explored. The approach used to prepare the map is based on that used for rabbits in Queensland (Berman et al. 1998). An index of rabbit density was determined using the number of Spanish rabbit fleas released per square kilometre for each Soil Map Unit (Atlas of Australian Soils). Spanish rabbit fleas were released into active rabbit warrens at 1606 sites in the early 1990s as an additional vector for myxoma virus, and the locations of the releases were recorded using a Global Positioning System (GPS). Releases were predominantly in arid areas, but some fleas were released in south east Queensland and the New England Tablelands of New South Wales. The map produced appears to reflect the distribution and density of rabbits well, at least in the areas where Spanish fleas were released. Rabbit pellet counts conducted in 2007 at 54 sites across an area of south east South Australia, south eastern Queensland, and parts of New South Wales (New England Tablelands and south west), in soil Map Units where Spanish fleas were released, provided a preliminary means to ground-truth the map. There was a good relationship between mean pellet count score and the index of abundance for soil Map Units.
Rabbit pellet counts may allow extension of the map into other parts of Australia where no Spanish rabbit fleas were released and where there may be no other consistent information on rabbit location and density. The recent Equine Influenza outbreak provided a further test of the value of this mapping method. The distribution and density of domestic horses were mapped to provide estimates of the number of horses in various regions; these estimates were close to the actual numbers of horses subsequently determined from vaccination records and registrations. The soil Map Units are not simply soil types; they contain information on land use and vegetation, and the soil classification is relatively localised. These properties make this mapping method useful, not only for rabbits, but also for other species that are not so dependent on soil type for survival.
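The density index described above, point releases aggregated per soil map unit and divided by the unit's area, can be sketched as a simple aggregation. The data layout (unit IDs, counts, areas) is a hypothetical illustration of the calculation, not the paper's data structures.

```python
# Hedged sketch of the index: flea releases aggregated per Soil Map Unit,
# divided by unit area to give releases per square kilometre.
# (Field names and values here are hypothetical.)

def density_index(releases, unit_area_km2):
    """releases: list of (unit_id, n_fleas); unit_area_km2: unit_id -> area.
    Returns unit_id -> fleas released per square kilometre."""
    totals = {}
    for unit_id, n_fleas in releases:
        totals[unit_id] = totals.get(unit_id, 0) + n_fleas
    return {u: n / unit_area_km2[u] for u, n in totals.items()}
```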