996 results for DISTANCE MEASUREMENTS


Relevance:

60.00%

Publisher:

Abstract:

One of the main consequences of habitat loss and fragmentation is the increase in patch isolation and the consequent decrease in landscape connectivity. In this context, species persistence depends on their responses to this new landscape configuration, particularly on their capacity to move through the inter-habitat matrix. Here, we aimed first to determine gap-crossing probabilities for different gap widths for two forest birds (Thamnophilus caerulescens, Thamnophilidae, and Basileuterus culicivorus, Parulidae) from the Brazilian Atlantic rainforest. These values were obtained with a playback technique and then used in graph-theoretic analyses to determine functional connections among forest patches. Both species were capable of crossing forest gaps between patches, and these movements were related to gap width. The probability of crossing 40 m gaps was 50% for both species; it falls to 10% for gaps of 60 m (B. culicivorus) or 80 m (T. caerulescens). Notably, birds responded to playback stimulation at roughly twice the distance in within-forest (control) trials as in gap-crossing trials. Models that included gap-crossing capacity had greater explanatory power for variation in species abundance than strictly structural models based merely on patch area and distance measurements. These results highlight that even very simple functional connectivity measurements related to gap-crossing capacity can improve our understanding of the effect of habitat fragmentation on bird occurrence and abundance.
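As an illustration of how such gap-crossing probabilities can feed a graph-based connectivity analysis, the sketch below interpolates a logistic crossing curve through the two reported points (50% at 40 m and 10% at 80 m, the values for T. caerulescens) and uses it to weight functional links between patches; the patch coordinates and the logistic form are illustrative assumptions, not the authors' procedure.

```python
import math

def crossing_prob(gap_m, d50=40.0, d10=80.0):
    """Logistic gap-crossing curve through the two reported points:
    p(d50) = 0.5 and p(d10) = 0.1 (T. caerulescens values)."""
    k = math.log(9.0) / (d10 - d50)   # steepness fixed by the 10% point
    return 1.0 / (1.0 + math.exp(k * (gap_m - d50)))

# Functional graph: forest patches as nodes, links weighted by the
# probability of crossing the gap between them (made-up coordinates).
patches = {"A": (0.0, 0.0), "B": (50.0, 0.0), "C": (130.0, 20.0)}
names = list(patches)
links = {(u, v): crossing_prob(math.dist(patches[u], patches[v]))
         for i, u in enumerate(names) for v in names[i + 1:]}
```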

Relevance:

60.00%

Publisher:

Abstract:

The current cosmological dark sector (dark matter plus dark energy) challenges our comprehension of the physical processes taking place in the Universe. Recently, some authors have tried to falsify the basic underlying assumptions of this dark matter-dark energy paradigm. In this Letter, we show that oversimplifications of the measurement process may produce false positives in any consistency test based on the globally homogeneous and isotropic Λ cold dark matter (ΛCDM) model and its expansion history inferred from distance measurements. In particular, when local inhomogeneity effects due to clumped matter or voids are taken into account, an apparent violation of the basic assumptions (Copernican Principle) seems to be present. Conversely, the amplitude of the deviations also probes the degree of reliability of the phenomenological Dyer-Roeder procedure by confronting its predictions with the accuracy of the weak lensing approach. Finally, a new method is devised to reconstruct the effects of the inhomogeneities in a ΛCDM model, and some suggestions of how to distinguish clumpiness (or void) effects from different cosmologies are discussed.
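The Dyer-Roeder prescription mentioned above can be made concrete with a short numerical integration. The sketch below solves the standard Dyer-Roeder equation for the angular diameter distance in flat ΛCDM with smoothness parameter α (α = 1 recovers the homogeneous case, α < 1 mimics clumped matter along the beam); Ωm = 0.3 is an illustrative value, not one taken from the Letter.

```python
import numpy as np
from scipy.integrate import solve_ivp

def dyer_roeder_DA(z_max, alpha=1.0, omega_m=0.3, n=500):
    """Angular diameter distance (units of c/H0) from the Dyer-Roeder
    equation in flat LCDM with smoothness parameter alpha."""
    E = lambda z: np.sqrt(omega_m * (1 + z)**3 + 1 - omega_m)

    def rhs(z, y):
        D, Dp = y
        Ez = E(z)
        dE = 1.5 * omega_m * (1 + z)**2 / Ez          # dE/dz, flat LCDM
        Dpp = (-(2 / (1 + z) + dE / Ez) * Dp
               - 1.5 * alpha * omega_m * (1 + z) / Ez**2 * D)
        return [Dp, Dpp]

    zs = np.linspace(0.0, z_max, n)
    sol = solve_ivp(rhs, (0.0, z_max), [0.0, 1.0], t_eval=zs, rtol=1e-8)
    return zs, sol.y[0]

zs, D_hom = dyer_roeder_DA(2.0, alpha=1.0)    # homogeneous FLRW limit
zs, D_clump = dyer_roeder_DA(2.0, alpha=0.5)  # partially clumped beam
```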

Relevance:

60.00%

Publisher:

Abstract:

We have conducted a program of trigonometric distance measurements to 13 members of the TW Hydrae Association (TWA), which will enable us (through back-tracking methods) to derive a convincing estimate of the age of the association, independent of stellar evolutionary models. With age, distance, and luminosity known for an ensemble of TWA stars and brown dwarfs, models of early stellar evolution (which are still uncertain for young ages and substellar masses) will then be constrained by observations over a wide range of masses (0.025 to 0.7 M⊙).
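For context, a trigonometric distance follows directly from the measured annual parallax, d[pc] = 1/ϖ[arcsec]; a minimal sketch (the 18 mas value is purely illustrative, merely consistent with the roughly 50 pc distance of the TWA):

```python
def parallax_to_distance_pc(parallax_mas: float) -> float:
    """Trigonometric distance in parsecs: d = 1000 / parallax[mas]."""
    return 1000.0 / parallax_mas

print(parallax_to_distance_pc(18.0))  # ~55.6 pc, a TWA-like value
```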

Relevance:

60.00%

Publisher:

Abstract:

This paper studies the problem of determining the positions of beacon nodes in Local Positioning Systems (LPSs) when no inter-beacon distance measurements are available and neither the mobile node nor any of the stationary nodes has positioning or odometry information. The common setup uses a mobile node capable of measuring its distance to the stationary beacon nodes within a sensing radius. Many authors have applied heuristic methods based on optimization algorithms to solve the problem; however, such methods require a good initial estimate of the node positions in order to converge to the correct solution. In this paper we present a new method to calculate the inter-beacon distances, and hence the beacon positions, based on the linearization of the trilateration equations into a closed-form solution that does not require any approximate initial estimate. Simulations and field evaluations show a good estimation of the beacon node positions.
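The linearization at the heart of the method can be sketched for the textbook case of locating one unknown node from known beacon positions (the paper inverts this setup to recover the beacon positions themselves; the function below is my own illustration). Subtracting the first range equation from the others cancels the quadratic term and leaves a linear least-squares problem with a closed-form solution:

```python
import numpy as np

def trilaterate(beacons, ranges):
    """Linearized trilateration: subtracting |x - b0|^2 = r0^2 from the
    other range equations removes the |x|^2 term, leaving the linear
    system 2(b_i - b0)^T x = |b_i|^2 - |b0|^2 - r_i^2 + r0^2."""
    b = np.asarray(beacons, dtype=float)       # shape (n_beacons, dim)
    r = np.asarray(ranges, dtype=float)
    A = 2.0 * (b[1:] - b[0])
    c = np.sum(b[1:]**2, axis=1) - np.sum(b[0]**2) - r[1:]**2 + r[0]**2
    x, *_ = np.linalg.lstsq(A, c, rcond=None)  # closed form, no initial guess
    return x

# Beacons at known corners, ranges measured by the mobile node:
print(trilaterate([(0, 0), (10, 0), (0, 10)], [7.07, 7.07, 7.07]))  # ~(5, 5)
```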

Relevance:

60.00%

Publisher:

Abstract:

The Linearized Auto-Localization (LAL) algorithm estimates the positions of beacon nodes in Local Positioning Systems (LPSs) using only distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons' positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first-order Taylor approximation of the equations. Since the method depends on this approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that, by feeding this information into an improved weighted auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be reduced by more than 30% on average with respect to the original LAL method.
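A generic numerical version of this first-order idea (not the paper's specific derivation for the LAL equations) propagates an input covariance through a function via its Jacobian, cov_f ≈ J Σ Jᵀ:

```python
import numpy as np

def propagate_cov(f, x, cov_x, eps=1e-6):
    """First-order (Taylor) error propagation: cov_f ≈ J cov_x J^T,
    with the Jacobian J estimated by central finite differences."""
    x = np.asarray(x, dtype=float)
    f0 = np.atleast_1d(f(x))
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.atleast_1d(f(x + dx)) - np.atleast_1d(f(x - dx))) / (2 * eps)
    return J @ cov_x @ J.T
```

The approximation degrades as the function departs from linearity over the error scale, which is precisely what a confidence parameter such as τ is meant to flag.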

Relevance:

60.00%

Publisher:

Abstract:

Energy policies around the world mandate a progressive increase in renewable energy production. Extensive grassland areas with low productivity and land-use limitations have become target areas for sustainable energy production, both to avoid competition with food production on the limited arable land available and to minimize further conversion of grassland into intensively managed energy cropping systems or abandonment. However, the high spatio-temporal variability in botanical composition and biochemical parameters hampers reliable assessment of biomass yield and of quality with regard to anaerobic digestion.

To assess the performance of a multi-sensor combination (NIRS, ultrasonic distance measurements and LAI-2000) for predicting biomass, biweekly sensor measurements were taken on a pure stand of reed canary grass (Phalaris arundinacea), a legume-grass mixture and a diversity mixture with thirty-six species under an experimental extensive two-cut management system. Different combinations of the sensor response values were used in multiple regression analysis to improve biomass predictions compared with single sensors. Wavelength bands for sensor-specific NDVI-type vegetation indices were selected from the hyperspectral data and evaluated for biomass prediction, both as exclusive indices and in combination with LAI and ultrasonic distance measurements. Ultrasonic sward height was the best single-sensor predictor of biomass (R² = 0.73-0.76). Adding LAI-2000 improved the prediction performance by up to 30%, while NIRS barely improved it.

To evaluate broad-based prediction of biochemical parameters relevant for anaerobic digestion using hyperspectral NIRS, spectroscopic measurements were taken on biomass from the Jena-Experiment plots in 2008 and 2009. Measurements were conducted on different conditions of the biomass, including standing sward, hay and silage, and with different spectroscopic devices, to simulate the preparation and measurement conditions along the process chain for biogas production. The best prediction results were obtained for all constituents under laboratory measurement conditions, with dried and ground samples on a bench-top NIRS system (RPD > 3, coefficient of determination R² > 0.9).

The same biomass was further used in batch fermentation to analyse the impact of species richness and functional group composition on methane yields, using whole-crop digestion and press fluid derived from the Integrated Generation of Solid Fuel and Biogas from Biomass (IFBB) procedure. Although species richness and functional group composition were largely insignificant, the presence of grasses and legumes in the mixtures was the most important factor influencing methane yields in whole-crop digestion. High lignocellulose content and a high C/N ratio in grasses may have reduced digestibility in the first-cut material, and excess nitrogen may have inhibited methane production in second-cut legumes, while the batch experiments demonstrated superior specific methane yields of IFBB press fluids and showed that detrimental effects of the parent material were reduced by the technical treatment.
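The multi-sensor combination amounts to a multiple linear regression of biomass on the sensor responses. A minimal sketch with invented readings for the best-performing pair reported above (ultrasonic sward height plus LAI):

```python
import numpy as np

# Invented sensor readings: ultrasonic sward height (cm) and LAI,
# with matching biomass yields (t DM/ha); placeholders only.
height  = np.array([18.0, 25.0, 32.0, 41.0, 55.0, 63.0])
lai     = np.array([1.2, 1.9, 2.4, 3.1, 4.0, 4.6])
biomass = np.array([0.9, 1.4, 1.9, 2.6, 3.5, 4.1])

X = np.column_stack([np.ones_like(height), height, lai])
coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)   # intercept + 2 slopes
pred = X @ coef
r2 = 1 - np.sum((biomass - pred)**2) / np.sum((biomass - biomass.mean())**2)
```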

Relevance:

60.00%

Publisher:

Abstract:

Simultaneous Localization and Mapping (SLAM) is a procedure used to determine the location of a mobile vehicle in an unknown environment while constructing a map of that environment at the same time. Mobile platforms that use SLAM algorithms have industrial applications in autonomous maintenance, such as the inspection of flaws and defects in oil pipelines and storage tanks. A typical SLAM system consists of four main components: experimental setup (data gathering), vehicle pose estimation, feature extraction, and filtering. Feature extraction is the process of identifying significant features of the unknown environment, such as corners, edges, walls, and interior features.

In this work, an original feature extraction algorithm specific to distance measurements obtained from SONAR sensor data is presented. The algorithm combines the SONAR Salient Feature Extraction Algorithm and the Triangulation Hough Based Fusion with point-in-polygon detection. The reconstructed maps obtained through simulations and experimental data with the fusion algorithm are compared to maps obtained with existing feature extraction algorithms. Based on the results, the proposed algorithm is suggested as an option for data obtained from SONAR sensors in environments where other forms of sensing are not viable.

The feature extraction algorithm fusion requires the vehicle pose estimate as an input, which is obtained from a vehicle pose estimation model. For pose estimation, the author uses sensor integration to estimate the pose of the mobile vehicle. Different combinations of sensors are studied (e.g., encoder, gyroscope, or encoder and gyroscope), and the different sensor fusion techniques for pose estimation are experimentally studied and compared. The vehicle pose estimation model that produces the least error is used to generate inputs for the feature extraction algorithm fusion.

In the experimental studies, two environmental configurations are used: one without interior features and another with two interior features. Numerical and experimental findings are discussed. Finally, the SLAM algorithm is implemented along with the algorithms for feature extraction and vehicle pose estimation. Three different cases are experimentally studied, with the floor of the environment intentionally altered to induce slipping. Results obtained for implementations with and without SLAM are compared and discussed. The present work represents a step towards the realization of autonomous inspection platforms that perform concurrent localization and mapping in harsh environments.
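One simple instance of the encoder-plus-gyroscope pose estimation studied here is odometric dead reckoning, where the encoder supplies the travelled distance and the gyroscope the heading change per step; this is a sketch of the general idea, not the thesis's exact fusion model:

```python
import math

def update_pose(x, y, theta, d_enc, dtheta_gyro):
    """Dead-reckoning pose update: advance the encoder's travelled
    distance along the average heading over the step, with the heading
    change taken from the gyroscope."""
    x += d_enc * math.cos(theta + dtheta_gyro / 2.0)
    y += d_enc * math.sin(theta + dtheta_gyro / 2.0)
    return x, y, theta + dtheta_gyro
```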

Relevance:

60.00%

Publisher:

Abstract:

In recent years, we have witnessed the growth of the Internet of Things (IoT) paradigm, with its increased pervasiveness in our everyday lives. The possible applications are diverse: from a smartwatch able to measure heartbeat and communicate it to the cloud, to a device that triggers an event when we approach an exhibit in a museum. Many of these applications rely on the Proximity Detection task: for instance, the heartbeat might be measured only when the wearer is near a well-defined location for medical purposes, or the tourist attraction triggered only if someone is very close to it. Indeed, the ability of an IoT device to sense the presence of other devices nearby and calculate the distance to them can be considered the cornerstone of various applications, motivating research on this fundamental topic. The energy constraints of IoT devices are often at odds with the need for continuous operation to sense the environment and obtain highly accurate distance measurements to neighbouring devices, making the design of Proximity Detection protocols a challenging task.
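As one common concrete realisation of such ranging (the abstract does not commit to a specific technology), received signal strength can be inverted through a log-distance path-loss model; the calibration constants below are illustrative and device dependent:

```python
def rssi_to_distance_m(rssi_dbm, rssi_at_1m=-59.0, path_loss_exp=2.0):
    """Log-distance path-loss model: d = 10**((RSSI_1m - RSSI) / (10 n)),
    where n is the environment-dependent path-loss exponent."""
    return 10.0 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

print(rssi_to_distance_m(-71.0))  # ~4 m with these assumed constants
```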

Relevance:

40.00%

Publisher:

Abstract:

BACKGROUND: Pericardium 6 (P6) is one of the most frequently used acupuncture points, especially for preventing nausea and vomiting. At this point, the median nerve lies very superficially. OBJECTIVES: To investigate the distance between the needle tip and the median nerve during acupuncture at P6, we conducted a prospective observational ultrasound (US) imaging study. We tested the hypothesis that de qi (a sensation typical of acupuncture needling) is evoked when the needle comes into contact with the epineural tissue, thereby preventing nerve penetration. SETTINGS/LOCATION: The outpatient pain clinic of the Medical University of Vienna, Austria. SUBJECTS: Fifty (50) patients receiving acupuncture treatment including P6 bilaterally. INTERVENTIONS: Patients were examined at both forearms using US (a 10-MHz linear transducer) after insertion of the needle at P6. OUTCOME MEASURES: The distance between the needle tip and the median nerve, the number of nerve contacts and nerve penetrations, and the number of successfully elicited de qi sensations were recorded. RESULTS: Complete data were obtained in 97 cases. The mean distance from the needle tip to the nerve was 1.8 mm (standard deviation 2.2; range 0-11.3). Nerve contacts were recorded in 52 cases, in 14 of which the nerve was penetrated by the needle. De qi was elicited in 85 cases. We found no association between nerve contacts and de qi. The 1-week follow-up showed no complications or neurologic problems. CONCLUSIONS: This is the first investigation to demonstrate the relationship between acupuncture needle placement and adjacent neural structures using US technology. The rate of median nerve penetrations by the acupuncture needle at P6 was surprisingly high, but these appeared to carry no risk of neurologic sequelae. De qi at P6 does not depend on median nerve contact, nor does it prevent median nerve penetration.

Relevance:

30.00%

Publisher:

Abstract:

Context. Observations in the cosmological domain depend heavily on the validity of the cosmic distance-duality (DD) relation, η = D_L(z)(1+z)^(-2)/D_A(z) = 1, an exact result required by the Etherington reciprocity theorem, where D_L(z) and D_A(z) are, respectively, the luminosity and angular diameter distances. In the limit of very small redshifts, D_A(z) ≈ D_L(z) and the relation is trivially satisfied. Measurements of the Sunyaev-Zeldovich effect (SZE) and X-rays, combined with the DD relation, have been used to determine D_A(z) from galaxy clusters. This combination offers the possibility of testing the validity of the DD relation, as well as determining which physical processes occur in galaxy clusters via their shapes.
Aims. We use the WMAP (7-year) results, fixing the conventional ΛCDM model, to verify the consistency between the validity of the DD relation and the different assumptions about galaxy cluster geometry usually adopted in the literature.
Methods. We assume that η is a function of redshift, parametrized by two different relations: η(z) = 1 + η₀z and η(z) = 1 + η₀z/(1+z), where η₀ is a constant parameter quantifying a possible departure from the strict validity of the DD relation. To determine the probability density function (PDF) of η₀, we consider the angular diameter distances from galaxy clusters recently studied by two different groups, assuming elliptical (isothermal) and spherical (non-isothermal) β models. The DD relation holds strictly only if the η₀ PDF peaks at η₀ = 0.
Results. The elliptical β model is in good agreement with the data, showing no violation of the DD relation (PDF peaked close to η₀ = 0 at 1σ), while the spherical (non-isothermal) model is only marginally compatible at 3σ.
Conclusions. The present results, derived by combining the SZE and X-ray surface brightness data from galaxy clusters with the latest WMAP (7-year) results, favour an elliptical geometry for galaxy clusters. It is remarkable that a local property such as the geometry of galaxy clusters can be constrained by a global argument provided by the cosmic DD relation.
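The construction of the η₀ PDF can be sketched numerically: given η(z_i) estimates with errors (the numbers below are invented placeholders, not the cluster data used in the paper), a χ² scan over η₀ for the linear parametrization gives the likelihood directly:

```python
import numpy as np

# Invented eta(z_i) estimates with 1-sigma errors (placeholders only)
z   = np.array([0.10, 0.20, 0.35, 0.50, 0.70])
eta = np.array([1.02, 0.97, 0.95, 1.05, 0.93])
sig = np.array([0.05, 0.06, 0.07, 0.08, 0.10])

eta0 = np.linspace(-1.0, 1.0, 2001)
chi2 = (((eta - (1.0 + np.outer(eta0, z))) / sig)**2).sum(axis=1)
pdf = np.exp(-0.5 * (chi2 - chi2.min()))
pdf /= pdf.sum() * (eta0[1] - eta0[0])       # normalised PDF of eta0
print("peak at eta0 =", eta0[pdf.argmax()])  # DD holds if this is ~0
```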

Relevance:

30.00%

Publisher:

Abstract:

In this Letter, we propose a new and model-independent cosmological test for the distance-duality (DD) relation, η = D_L(z)(1+z)^(-2)/D_A(z) = 1, where D_L and D_A are, respectively, the luminosity and angular diameter distances. For D_L we consider two sub-samples of Type Ia supernovae (SNe Ia) taken from the Constitution data, whereas the D_A distances are provided by two samples of galaxy clusters compiled by De Filippis et al. and Bonamente et al. by combining Sunyaev-Zeldovich effect and X-ray surface brightness measurements. The SNe Ia redshifts of each sub-sample were carefully chosen to coincide with those of the associated galaxy cluster sample (Δz < 0.005), thereby allowing a direct test of the DD relation. Since D_A(z) ≈ D_L(z) for very low redshifts, we tested the DD relation by assuming that η is a function of redshift, parameterized by two different expressions: η(z) = 1 + η₀z and η(z) = 1 + η₀z/(1+z), where η₀ is a constant parameter quantifying a possible departure from the strict validity of the reciprocity relation (η₀ = 0). In the best scenario (linear parameterization), we obtain η₀ = -0.28 ± 0.44 (2σ, statistical + systematic errors) for the De Filippis et al. sample (elliptical geometry), a result only marginally compatible with the DD relation. However, for the Bonamente et al. sample (spherical geometry) the constraint is η₀ = -0.42 ± 0.34 (3σ, statistical + systematic errors), which is clearly incompatible with the distance-duality relation.
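The redshift-matching step (Δz < 0.005) and the observed ratio are simple to express; a sketch with invented helper names, assuming D_L and D_A arrive as arrays paired with their redshifts:

```python
import numpy as np

def pair_by_redshift(z_sn, z_cluster, tol=0.005):
    """For each cluster, index of the closest SN Ia in redshift,
    keeping only pairs with |dz| < tol."""
    idx = np.abs(z_sn[None, :] - z_cluster[:, None]).argmin(axis=1)
    keep = np.abs(z_sn[idx] - z_cluster) < tol
    return idx, keep

def eta_obs(D_L, D_A, z):
    """Observed distance-duality ratio for the matched pairs."""
    return D_L * (1.0 + z)**-2 / D_A
```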

Relevance:

30.00%

Publisher:

Abstract:

Objectives: The aim of this study was to assess the influence of irradiation distance and of cooling on the efficacy of the Er:YAG laser in preventing enamel demineralization. Methods: 84 enamel blocks were randomly assigned to seven groups (n = 12): G1: control group, no treatment; G2-G7: experimental groups treated with the Er:YAG laser (80 mJ/2 Hz) at different irradiation distances, with or without cooling: G2: 4 mm/2 mL; G3: 4 mm/no cooling; G4: 8 mm/2 mL; G5: 8 mm/no cooling; G6: 16 mm/2 mL; G7: 16 mm/no cooling. The samples were subjected to in vitro pH cycling for 14 days. Next, the specimens were cut into sections 80-100 μm thick, and the demineralization patterns of the prepared slices were assessed under a polarized light microscope. Three samples from each group were analyzed with scanning electron microscopy. Analysis of variance and the Fisher test were used for the statistical analysis of the caries-lesion-depth measurements (CLDM) (α = 5%). Results: The control group (CLDM = 0.67 mm) was statistically different from group 2 (CLDM = 0.42 mm), which presented a smaller lesion depth, and from group 6 (CLDM = 0.91 mm), which presented a greater lesion depth. Groups 3 (CLDM = 0.74 mm), 4 (CLDM = 0.70 mm), 5 (CLDM = 0.67 mm) and 7 (CLDM = 0.89 mm) were statistically similar. The scanning electron microscopy analysis showed ablation areas in the samples from groups 4, 5, 6 and 7, and a slightly demineralized area in group 2. Conclusions: The Er:YAG laser was effective in preventing enamel demineralization at a 4-mm irradiation distance with cooling.

Relevance:

30.00%

Publisher:

Abstract:

This review aims to identify strategies to optimise radiography practice using digital technologies for full-spine studies on paediatric patients, focusing particularly on methods used to diagnose and measure the severity of spinal curvatures. The literature search was performed on several databases (PubMed, Google Scholar and ScienceDirect) and relevant websites (e.g., the American College of Radiology and the International Commission on Radiological Protection) to identify guidelines and recent studies on dose optimisation in paediatrics using digital technologies. Plain radiography was identified as the most accurate method. The American College of Radiology (ACR) and the European Commission (EC) provide the two guidelines identified as most relevant to the subject. The ACR guidelines were updated in 2014; however, they do not provide detailed guidance on technical exposure parameters. The EC guidelines are more complete but are dedicated to screen-film systems. Other studies reviewed the exposure parameters that should be included in optimisation, such as tube current, tube voltage and source-to-image distance; however, each explored only a few of these parameters, not all of them together. One publication explored all parameters together, but only for adults. Given the lack of literature on exposure parameters for paediatrics, more research is required to guide and harmonise practice.

Relevance:

30.00%

Publisher:

Abstract:

Aim: To optimise a set of exposure factors, at the lowest effective dose, for delineating spinal curvature with the modified Cobb method in a full-spine examination using computed radiography (CR) and a 5-year-old paediatric anthropomorphic phantom. Methods: Images were acquired while varying a set of parameters: position (antero-posterior (AP), postero-anterior (PA) and lateral), peak kilovoltage (kVp) (66-90), source-to-image distance (SID) (150-200 cm), broad focus, and the use of a grid (grid in/out), to analyse the impact on effective dose (E) and image quality (IQ). IQ was analysed with two approaches: objective [contrast-to-noise ratio (CNR)] and perceptual, using five observers. Monte Carlo modelling was used for dose estimation, and Cohen's kappa coefficient was used to calculate inter-observer variability. The angle was measured using Cobb's method on lateral projections under different imaging conditions. Results: PA yielded the lowest effective dose (0.013 mSv) compared with AP (0.048 mSv) and lateral (0.025 mSv). The exposure parameters that allowed the lowest dose were 200 cm SID, 90 kVp, broad focus and grid out, for paediatrics using an Agfa CR system. Thirty-seven images were assessed for IQ and thirty-two were classified as adequate. Cobb angle measurements varied between 16° ± 2.9° and 19.9° ± 0.9°. Conclusion: Cobb angle measurements can be performed at the lowest dose, even with a low contrast-to-noise ratio. The variation in measurements (±2.9°) is within the range of acceptable clinical error, without impact on clinical diagnosis. Further work is recommended to increase the sample size and to develop a more robust perceptual IQ assessment protocol for observers.
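For reference, the (modified) Cobb angle reduces to the acute angle between the two digitised endplate lines; a minimal sketch from their slopes in image coordinates:

```python
import math

def cobb_angle_deg(slope_upper: float, slope_lower: float) -> float:
    """Cobb angle as the acute angle between the two endplate lines."""
    a = abs(math.degrees(math.atan(slope_upper) - math.atan(slope_lower)))
    return min(a, 180.0 - a)

print(cobb_angle_deg(0.18, -0.14))  # ~18 degrees for these made-up slopes
```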