938 results for "Clouds of points"

Relevance: 80.00%

Publisher:

Abstract:

Objective: Impaired social interaction and repetitive behavior are key features of autism spectrum disorder (ASD). In the present study we compared social decision-making in subjects with and without ASD. Subjects performed five social decision-making games that assessed trust, fairness, cooperation and competition behavior, and social value orientation. Methods: 19 adults with ASD and 17 controls, matched for age and education, participated in the study. Each subject performed five social decision-making tasks. In the trust game, subjects could maximize their gain by sharing some of their money with another person. In the punishment game, subjects played two versions of the Dictator's Dilemma. In the dictator condition they could share an amount of 0-100 points with another person. In the punishment condition, the opponent was able to punish the subject if he/she was not satisfied with the amount of points received. In the cooperation game, subjects played in a small group of three people. Each of them could (anonymously) contribute an amount of 5, 7.5 or 10 Swiss francs. The goal of the game was to achieve a high group minimum. In the competition game, subjects performed a dexterity task. Before performing the task, they were asked whether they wanted to compete (winner takes all) or cooperate (sharing the jointly achieved amount of points) with a randomly selected person. Lastly, subjects performed a social value orientation task in which they played for themselves and for another person. Results: There was no overall difference between healthy controls and ASD subjects in investment in the trust game. However, healthy controls increased their investment over trials whereas ASD subjects did not. A similar pattern was found for the punishment game. Furthermore, ASD subjects showed a decreased investment in the dictator condition of the punishment game. There were no mean differences in competition behavior or social value orientation. Conclusions: The results provide evidence for differences between ASD subjects and healthy controls in social decision-making. Subjects with ASD behaved more consistently than healthy controls in the trust game and the dictator dilemma. The present findings provide evidence for impaired social learning in ASD.


This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that becomes more effective as the number of processors grows. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of the points at which the expensive function has already been evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers, from which many candidate points are generated by random perturbations. Based on the surrogate approximation, the best candidate point for each of the P centers is then selected for expensive evaluation, with simultaneous computation on P processors. Centers that previously did not generate good solutions are made tabu for a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
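As an illustration of the selection step, the following sketch reimplements the two-objective non-dominated sorting described above (function value to be minimized, minimum distance to other evaluated points to be maximized); the toy data and names are hypothetical, not from the paper:

```python
import numpy as np

def nondominated_front(F, D):
    """Return indices of points not dominated under (minimize F, maximize D).

    F : expensive-function values at the evaluated points.
    D : each point's minimum distance to the other evaluated points.
    A point i is dominated if some j is no worse in both objectives and
    strictly better in at least one.
    """
    n = len(F)
    front = []
    for i in range(n):
        dominated = False
        for j in range(n):
            if j != i and F[j] <= F[i] and D[j] >= D[i] and (F[j] < F[i] or D[j] > D[i]):
                dominated = True
                break
        if not dominated:
            front.append(i)
    return front

# Toy set of previously evaluated points and their function values.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.5], [2.0, 2.0]])
F = np.array([3.0, 1.0, 2.0, 2.5])
dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
np.fill_diagonal(dists, np.inf)
D = dists.min(axis=1)           # min distance to the other evaluated points
print(nondominated_front(F, D)) # candidate centers: best value and most isolated
```

In SOP the points on the best fronts become the P centers from which perturbation candidates are generated.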


Validation of treatment plan quality and dose calculation accuracy is essential for new radiotherapy techniques, including volumetric modulated arc therapy (VMAT). VMAT delivers intensity modulated radiotherapy treatments while simultaneously rotating the gantry, adding an additional level of complexity to both the dose calculation and the delivery of VMAT treatments compared to static-gantry IMRT. The purpose of this project was to compare two VMAT systems, Elekta VMAT and Varian RapidArc, to the current standard of care, IMRT, in terms of both treatment plan quality and dosimetric delivery accuracy using the Radiological Physics Center (RPC) head and neck (H&N) phantom. Clinically relevant treatment plans were created for the phantom using typical prescriptions and dose constraints for Elekta VMAT (planned with Pinnacle3 SmartArc), RapidArc and IMRT (the latter two planned with Eclipse). The treatment plans were evaluated to determine whether they were clinically comparable using several dosimetric criteria, including the ability to meet dose objectives, hot spots, conformity index, and homogeneity index. The planned treatments were delivered to the phantom, and absolute doses and relative dose distributions were measured with thermoluminescent dosimeters (TLDs) and radiochromic film, respectively. The measured and calculated doses of each treatment were compared to determine whether they were clinically acceptable based on the RPC criteria of ±7% dose difference and 4 mm distance-to-agreement. Gamma analysis was also used to assess dosimetric accuracy. All treatment plans were able to meet the dosimetric objectives set by the RPC and had similar hot spots in the normal tissue. The Elekta VMAT plan was more homogeneous but less conformal than the RapidArc and IMRT plans. When comparing the measured and calculated doses, all plans met the RPC ±7%/4 mm criteria. The percentage of points passing the gamma analysis for each treatment delivery was acceptable. Treatment plan quality of the Elekta VMAT, RapidArc and IMRT treatments was comparable for consistent dose prescriptions and constraints. Additionally, the dosimetric accuracy of the Elekta VMAT and RapidArc treatments was verified to be within acceptable tolerances.
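As a rough illustration of how a gamma analysis combines the ±7% dose-difference and 4 mm distance-to-agreement criteria, here is a simplified 1-D gamma-index sketch; the dose profiles and grids below are made up for illustration, and the RPC evaluation itself is of course performed on the measured film and TLD data:

```python
import numpy as np

def gamma_1d(x_meas, dose_meas, x_calc, dose_calc, dose_tol=0.07, dta_mm=4.0):
    """Simplified 1-D gamma index with global dose normalization.

    For each measured point, search all calculated points for the minimum
    combined dose-difference / distance-to-agreement metric; gamma <= 1 passes.
    """
    d_max = dose_calc.max()
    gammas = []
    for xm, dm in zip(x_meas, dose_meas):
        dist = (x_calc - xm) / dta_mm                   # distance, in DTA units
        ddiff = (dose_calc - dm) / (dose_tol * d_max)   # dose diff, in tolerance units
        gammas.append(np.sqrt(dist**2 + ddiff**2).min())
    return np.array(gammas)

# Hypothetical profiles: calculation on a fine grid, measurement on a coarse
# grid shifted by 1 mm (well within the 7%/4 mm criterion).
x_calc = np.arange(0.0, 40.0, 0.25)
x_meas = np.arange(0.0, 40.0, 2.0)
calc = np.exp(-((x_calc - 20.0) / 10.0) ** 2)
meas = np.exp(-((x_meas - 21.0) / 10.0) ** 2)
g = gamma_1d(x_meas, meas, x_calc, calc)
print((g <= 1.0).mean())   # fraction of measured points passing
```

A full evaluation interpolates the calculated dose in 2-D or 3-D, but the pass/fail logic is the same.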


This paper addresses the social representations that circulate about the possible school trajectories of adolescents from working-class sectors, and how those representations influence that school trajectory, conditioned by the students' belonging to working-class sectors, the school's location within differentiated educational circuits, and the school trajectory of the adolescents' parents. The crossed expectations of teachers, parents and students regarding school trajectories, and the position each agent occupies in the social space, construct a particular point of view about the school and about educational trajectories.


This dataset presents Differential Global Positioning System (DGPS) data acquired within the Bossons glacier proglacial area. Bossons glacier is a rapidly retreating glacier and its proglacial area has been deglaciated for ~30 years. Bossons stream is one of the outlets of the subglacial drainage system. It starts as an 800 m steep cascade reach, then flows through an area with a gentler slope: the Plan des Eaux (PdE). PdE is a 300 m long, 50 m wide proglacial alluvial plain with increasing channel mobility in the downstream direction but decreasing slope gradient and incision. As it may act as a sediment trap, studying periglacial and proglacial erosion processes in the Bossons catchment requires quantifying the evolution of the PdE sediment volume. A several-meter-sized block located within the Bossons proglacial area was set up as the GPS base: its location was measured by one antenna (Topcon Hyper Pro) performing 600 consecutive measurements throughout one day. A second antenna (Topcon Hyper Pro) was then used to measure the XYZ locations of points in the proglacial area on a ~2 m grid. Radio communication between the two antennas allowed differential corrections to be carried out automatically in the field using the Topcon FC-250 hand controller. This methodology yields 3 cm XY and 1.5 cm Z uncertainties. DGPS data were acquired through 10 campaigns from 2004 to 2014; the campaigns from 2004 to 2008 cover a smaller area than those from 2010 to 2014. Digital Elevation Models (DEMs) were interpolated from the DGPS data, and the difference between two DEMs yields the deposited and eroded volumes within PdE. Maps of PdE volume variation between two campaigns show that incision mainly occurs in the upper and lower sections whereas deposition dominates in the middle section. Deposition, denudation and net rates (deposition rate - denudation rate) are calculated by normalizing volumes by DEM areas. Deposition dominates, with a mean net rate of 29 mm/yr. However, strong inter-annual variability exists and some years are dominated by denudation: -36 mm/yr and -100 mm/yr for 2006 and 2011, respectively. Nonetheless, the oldest campaigns (2004 to 2008) covered only the lower part of the alluvial plain, and ruling them out to keep only complete DEMs (2010 to 2014) yields a mean net rate of ~15 mm/yr. This result is consistent with field observations of both strong deposition (e.g. flood deposits) and strong erosion (e.g. 30 cm incision). The Bossons glacier proglacial area is thus dynamic, with year-to-year geomorphological changes, but may lean toward increasing its mean elevation through a deposition-dominated system.
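The DEM-differencing step described above can be sketched as follows; the grids and values are toy numbers, not the Bossons data:

```python
import numpy as np

def volume_change(dem_old, dem_new, cell_area):
    """Deposited and eroded volumes from two co-registered DEMs (same grid).

    dem_* : 2-D elevation arrays (m); cell_area : grid-cell area (m^2).
    Returns (deposited, eroded, net) volumes in m^3. Dividing the net volume
    by the DEM area and the time interval gives net rates such as mm/yr.
    """
    dz = dem_new - dem_old
    deposited = dz[dz > 0].sum() * cell_area
    eroded = -dz[dz < 0].sum() * cell_area
    return deposited, eroded, deposited - eroded

# Toy 2x2 DEMs on a ~2 m grid (cell_area = 4 m^2), hypothetical elevations.
dem_2010 = np.array([[10.0, 10.0], [10.0, 10.0]])
dem_2014 = np.array([[10.1, 10.0], [9.8, 10.05]])
dep, ero, net = volume_change(dem_2010, dem_2014, cell_area=4.0)
print(dep, ero, net)
```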


The distribution of seagrass and associated benthic communities on the reef and lagoon of Low Isles, Great Barrier Reef, was mapped between 29 July and 29 August 1997. For this survey, observers walked or free-dived at survey points positioned approximately 50 m apart along a series of transects. Visual estimates of above-ground seagrass biomass and the % cover of each benthos and substrate type were recorded at each survey point. A differential handheld global positioning system (GPS) was used to locate each survey point (accuracy ±3 m). A total of 349 benthic survey points were examined. To assist with mapping meadow/habitat-type boundaries, an additional 177 field points were assessed, and a georeferenced 1:12,000 aerial photograph (26 August 1997) was used as a secondary source of information. Bathymetric data (elevation below Mean Sea Level) measured at each point assessed, together with data from Ellison (1997), supplemented the information used to determine boundaries, particularly in the subtidal lagoon. A total of 127.8 ±29.6 hectares was mapped. Seagrass and associated benthic community data were derived by haphazardly placing 3 quadrats (0.25 m**2) at each survey point. Seagrass above-ground biomass (standing crop, grams dry weight (g DW m**-2)) was determined within each quadrat using a non-destructive visual biomass estimation technique, and the seagrass species present were identified. In addition, the cover of all benthos was measured within each of the 3 quadrats using a systematic 5-point method. For each quadrat, the frequency of occurrence of each benthic category was converted to a percentage of the total number of points (5 per quadrat). Data are presented as the average of the 3 quadrats at each point. Polygons of discrete seagrass meadow/habitat-type boundaries were created using the on-screen digitising functions of ArcGIS (ESRI Inc.), differentiated on the basis of colour, texture, and the geomorphic and geographical context. The resulting seagrass and benthic cover data for each survey point and for each seagrass meadow/habitat type were linked to GPS coordinates, saved as ArcMap point and polygon shapefiles, respectively, and projected to Universal Transverse Mercator WGS84 Zone 55 South.
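The conversion from the systematic 5-point quadrat counts to averaged % cover can be sketched like this; the category names and counts are invented for illustration:

```python
def percent_cover(quadrat_hits, points_per_quadrat=5):
    """Convert point-intercept hits to % cover, averaged over quadrats.

    quadrat_hits : list of dicts mapping benthic category -> number of the
    systematic points (out of 5) at which the category was recorded.
    """
    categories = set().union(*quadrat_hits)
    cover = {}
    for cat in categories:
        per_quadrat = [100.0 * q.get(cat, 0) / points_per_quadrat
                       for q in quadrat_hits]
        cover[cat] = sum(per_quadrat) / len(per_quadrat)  # mean of the quadrats
    return cover

# Three hypothetical 0.25 m^2 quadrats at one survey point.
quadrats = [{"seagrass": 3, "sand": 2},
            {"seagrass": 4, "sand": 1},
            {"seagrass": 2, "sand": 2, "coral": 1}]
print(percent_cover(quadrats))
```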


A new method is presented to generate reduced order models (ROMs) in fluid dynamics problems of industrial interest. The method is based on the expansion of the flow variables in a Proper Orthogonal Decomposition (POD) basis, calculated from a limited number of snapshots obtained via Computational Fluid Dynamics (CFD). The POD-mode amplitudes are then calculated as minimizers of a properly defined overall residual of the equations and boundary conditions. The method includes various ingredients that are new in this field. The residual can be calculated using only a limited number of points in the flow field, which can be scattered either all over the whole computational domain or over a smaller projection window. The resulting ROM is both computationally efficient (reconstructed flow fields require, in cases that do not present shock waves, less than 1% of the time needed to compute a full CFD solution) and flexible (the projection window can avoid regions of large localized CFD errors). Also, for problems related to aerodynamics, the POD modes are obtained from a set of snapshots calculated by a CFD method based on the compressible Navier-Stokes equations and a turbulence model (which furthermore includes some unphysical stabilizing terms added for purely numerical reasons), but projection onto the POD manifold is made using the inviscid Euler equations, which makes the method independent of the CFD scheme. In addition, shock waves are treated specifically in the POD description, to avoid the need for a too large number of snapshots. Various definitions of the residual are also discussed, along with the number and distribution of snapshots, the number of retained modes, and the effect of CFD errors.
The method is checked and discussed on several test problems that describe (i) heat transfer in the recirculation region downstream of a backwards-facing step, (ii) the flow past a two-dimensional airfoil in both the subsonic and transonic regimes, and (iii) the flow past a three-dimensional horizontal tail plane. The method is both efficient and numerically robust, in the sense that the computational effort is quite small compared to CFD and the results are both reasonably accurate and largely insensitive to the definition of the residual, to CFD errors, and to the CFD method itself, which may contain artificial stabilizing terms. Thus, the method is amenable to practical engineering applications.
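A minimal sketch of the snapshot-based POD step: the basis comes from an SVD of the snapshot matrix, and the amplitudes here are obtained by plain least-squares projection rather than the equations-residual minimization the thesis actually uses. All data below are synthetic:

```python
import numpy as np

# Snapshot matrix: each column is one flow field sampled at N points
# (synthetic rank-3 data here; in the thesis the snapshots come from CFD).
rng = np.random.default_rng(0)
N, M = 200, 12                        # grid points, snapshots
modes_true = rng.standard_normal((N, 3))
amps = rng.standard_normal((3, M))
S = modes_true @ amps                 # rank-3 snapshot matrix

U, sigma, Vt = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sigma**2) / np.sum(sigma**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # number of retained modes
Phi = U[:, :r]                        # POD basis

# Reconstruct one snapshot from its POD-mode amplitudes (least-squares
# projection; the thesis instead minimizes a residual of the equations,
# possibly evaluated only on a projection window of points).
a = Phi.T @ S[:, 0]
print(r, np.allclose(Phi @ a, S[:, 0]))
```

Restricting the residual evaluation to a subset of points changes only which rows of the problem enter the minimization, which is what makes the projection window cheap and flexible.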


Human identification from a skull is a critical process in legal and forensic medicine, especially when no other means are available. Traditional clay-based methods attempt to generate the human face in order to identify the corresponding person. However, these reconstructions lack objectivity and consistency, since they depend on the practitioner. Current computerized techniques are based on facial models, which introduce undesired facial features when the final reconstruction is built. This paper presents an objective 3D craniofacial reconstruction technique, implemented in a graphic application, without using any facial template. The only information required by the software tool is the 3D image of the target skull and three parameters: age, gender and Body Mass Index (BMI) of the individual. Complexity is minimized, since the application database only consists of the anthropological information provided by soft-tissue depth values at a set of points on the skull.
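The core geometric idea, offsetting skull landmarks outward by tabulated soft-tissue depths, can be sketched as follows; the landmark names, depth values and normals below are hypothetical placeholders, not the application's actual database:

```python
import numpy as np

# Hypothetical tissue-depth table (mm) for one age/gender/BMI group; the
# real tool looks depths up from anthropological data for the individual.
DEPTHS_MM = {"glabella": 5.2, "nasion": 6.1, "pogonion": 9.8}

def reconstruct_face_points(landmarks, normals, depths=DEPTHS_MM):
    """Offset each skull landmark along its outward unit normal by the
    soft-tissue depth for that landmark, yielding estimated skin points."""
    skin = {}
    for name, p in landmarks.items():
        n = normals[name] / np.linalg.norm(normals[name])
        skin[name] = p + depths[name] * n
    return skin

# Toy landmark coordinates (mm) and outward normals on a skull mesh.
landmarks = {"glabella": np.array([0.0, 95.0, 30.0]),
             "nasion": np.array([0.0, 90.0, 25.0]),
             "pogonion": np.array([0.0, 10.0, 20.0])}
normals = {"glabella": np.array([0.0, 0.0, 1.0]),
           "nasion": np.array([0.0, 0.0, 1.0]),
           "pogonion": np.array([0.0, -0.5, 0.5])}
skin = reconstruct_face_points(landmarks, normals)
print(skin["glabella"])
```

A full reconstruction would interpolate a skin surface through such offset points; this sketch only shows the per-point depth offset.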


Most human-designed environments present specific geometrical characteristics. In them, it is easy to find polygonal, rectangular and circular shapes, with a series of typical relations between different elements of the environment. Introducing this kind of knowledge in the mapping process of mobile robots can notably improve the quality and accuracy of the resulting maps. It can also make them more suitable for higher-level reasoning applications. When mapping is formulated in a Bayesian probabilistic framework, a complete specification of the problem requires considering a prior for the environment. The prior over the structure of the environment can be applied in several ways; this dissertation presents two different frameworks, one using a feature-based approach and another employing a dense representation close to the measurement space. A feature-based approach implicitly imposes a prior for the environment. In this sense, feature-based graph SLAM was a first step towards a new mapping solution for structured scenarios. In the first framework, the prior is inferred by the system from a wide collection of feature-based priors, following an Expectation-Maximization approach to obtain the most probable structure and the most probable map. The representation of the structure of the environment is based on a hierarchical model with different levels of abstraction for the geometrical elements describing it. Various experiments were conducted to show the versatility and good performance of the proposed method. In the second framework, different priors can be defined by the user as sets of local constraints and energies for consecutive points in a range scan of a given environment. The set of constraints applied to each group of points depends on the topology, which is inferred by the system. In this way, flexible and generic priors can be incorporated very easily. Several tests were carried out to demonstrate the flexibility and the good results of the proposed approach.
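As a toy example of a local constraint energy over consecutive scan points, here is a straight-line ("wall") prior; the dissertation's actual constraint sets and topology inference are considerably richer, so this only illustrates the kind of quantity being minimized:

```python
import numpy as np

def collinearity_energy(points):
    """Energy of a local straight-line prior over consecutive scan points:
    sum of squared deviations of each point from the segment through its
    neighbours. Low energy means the points fit a wall-like structure."""
    e = 0.0
    for i in range(1, len(points) - 1):
        a, b, p = points[i - 1], points[i + 1], points[i]
        t = b - a
        t = t / np.linalg.norm(t)
        d = (p - a) - np.dot(p - a, t) * t   # residual orthogonal to the segment
        e += float(np.dot(d, d))
    return e

# A perfectly straight run of scan points versus one with a corner.
wall = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
corner = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
print(collinearity_energy(wall), collinearity_energy(corner))
```

In a map optimizer, such energies would be added to the measurement terms so that point configurations matching the inferred structure are preferred.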


The problem of finding a minimum-area polygonization for a given set of points in the plane, Minimum Area Polygonization (MAP), is NP-hard. Due to the complexity of the problem, we aim at developing algorithms to obtain approximate solutions. In this work, we suggest different strategies to minimize the polygonization area. We propose algorithms that search for approximate solutions to the MAP problem, and we present an experimental study on a set of instances of the problem.
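For illustration, one minimal baseline strategy (not necessarily among the paper's) is to build a simple star-shaped polygon by sorting the points by angle around their centroid, then evaluate its area with the shoelace formula:

```python
import math

def polygon_area(poly):
    """Shoelace formula for the area of a simple polygon."""
    n = len(poly)
    s = sum(poly[i][0] * poly[(i + 1) % n][1] - poly[(i + 1) % n][0] * poly[i][1]
            for i in range(n))
    return abs(s) / 2.0

def angular_polygonization(points):
    """Sort points by angle around the centroid. This always yields a
    simple (star-shaped) polygon, but generally not the minimum-area one;
    it is only a cheap starting solution that heuristics can improve."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return sorted(points, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))

# Toy instance: the interior point (2, 1) gets pulled into the boundary,
# giving a smaller area than the enclosing rectangle.
pts = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1)]
poly = angular_polygonization(pts)
print(polygon_area(poly))
```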


Fractal and multifractal are concepts that have grown increasingly popular in soil analysis in recent years, along with the development of fractal models. One of the common steps is to calculate the slope of a linear fit, commonly using the least-squares method. This shouldn't be a special problem; however, in many situations involving experimental data, the researcher has to select the range of scales at which to work, neglecting the rest of the points, in order to achieve the best linearity that this type of analysis requires. Robust regression is a form of regression analysis designed to circumvent some limitations of traditional parametric and non-parametric methods. With this method we don't have to assume that an outlier is simply an extreme observation drawn from the tail of a normal distribution that does not compromise the validity of the regression results. In this work we have evaluated the capacity of robust regression to select the points of the experimental data to use, trying to avoid subjective choices. Based on this analysis we have developed a new work methodology with two basic steps:
- Evaluation of the improvement of the linear fit when consecutive points are eliminated, based on the p-value, thereby considering the implications of reducing the number of points.
- Evaluation of the significance of the slope difference between the fit using the two extreme points and the fit using the selected points.
We compare the results of applying this methodology and the commonly used least-squares one. The data selected for these comparisons come from experimental soil roughness transects and from simulations based on the midpoint displacement method with added trends and noise. The results are discussed, indicating the advantages and disadvantages of each methodology.
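A common robust alternative to least squares for this kind of slope estimation is the Theil-Sen estimator (the median of all pairwise slopes); the sketch below, with synthetic data, only illustrates why a robust fit resists an outlying point at the end of the scale range, and is not the specific robust-regression procedure used in the paper:

```python
import numpy as np

def theil_sen_slope(x, y):
    """Robust slope estimate: median of all pairwise slopes. Outlying
    points barely move the median, unlike an ordinary least-squares fit."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))]
    return float(np.median(slopes))

# Synthetic log-log data with true slope 2 plus one outlier at the end
# of the scale range.
x = np.arange(1.0, 11.0)
y = 2.0 * x + 1.0
y[-1] += 15.0                        # outlier
ls_slope = np.polyfit(x, y, 1)[0]    # ordinary least squares
print(round(theil_sen_slope(x, y), 3), round(ls_slope, 3))
```

The least-squares slope is dragged well away from 2 by the single outlier, while the robust estimate is unaffected.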


Knowledge Management (KM) is based on the collection, filtering, processing and analysis of raw data which, through such refinement, can be turned into knowledge or wisdom. In this final-year project, these practices take place in a WSN (Wireless Sensor Network) composed of sophisticated devices commonly known as "motes", whose main characteristics are their low memory, battery and autonomy capacities. The primary objective of this project has been to combine a WSN with Knowledge Management, and to show that large amounts of information processing are possible, even with such low capacities, if the processes are properly distributed. First, basic concepts about WSNs (Wireless Sensor Networks) and the main elements of these networks are introduced. After presenting the communications architecture model, Knowledge Management is covered theoretically, followed by the interpretation, built from several bibliographic references, used to carry out the implementation of the project. The next step describes, point by point, all the components of the simulator: libraries, operation, and other matters of configuration and tuning. As an application scenario, a basic wireless sensor network is proposed whose topology and node placement are completely configurable. A network-level configuration based on the 6LoWPAN protocol is carried out, with the possibility of simplifying it. Data are processed according to a pyramidal Knowledge Management model adaptable to the user's needs. The hardware elements exhibit greater or lesser energy dependence according to their role and activity in the network. Through the various options provided by the implemented graphical interface and the result documents that are generated, a detailed follow-up study of the simulation can be carried out to check whether the stated expectations are met.
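A minimal sketch of the pyramidal data-to-information-to-knowledge flow that such a project distributes over the motes; all function names, thresholds and readings below are hypothetical, not taken from the simulator:

```python
# Data layer: each mote drops out-of-range (faulty) raw readings.
def filter_raw(samples, lo=-40.0, hi=85.0):
    return [s for s in samples if lo <= s <= hi]

# Information layer: a mote forwards a compact summary, not raw samples,
# which is what keeps memory and radio usage within the motes' capacities.
def aggregate(samples):
    return {"n": len(samples),
            "mean": sum(samples) / len(samples),
            "max": max(samples)}

# Knowledge layer: the sink turns summaries into an actionable decision.
def decide(summary, alarm_at=70.0):
    return "ALARM" if summary["max"] >= alarm_at else "ok"

raw = [21.5, 22.0, 199.0, 23.1, -55.0, 74.2]   # two faulty readings
summary = aggregate(filter_raw(raw))
print(summary["n"], decide(summary))
```

Distributing the lower layers onto the motes and only the upper layers onto the sink is what makes heavy processing feasible on constrained hardware.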


Airborne laser scanning (LiDAR) systems are becoming the main instruments for collecting cartographic information, mainly due to the high point density, accuracy and speed achieved in producing digital models. However, it would be important to have algorithms that filter the information, selecting those points measured in targeted areas. When measuring urban areas, buildings are the most important objects. Therefore, a new algorithm is proposed that classifies and separates the points measured on buildings and extracts, as a result, the outer boundary they define, so that the built-up area can be computed.
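One simple way to obtain an outer boundary from points already classified as belonging to a building is a convex hull (Andrew's monotone chain, below). Note this is only an approximation of what such an algorithm must do, since real building footprints are often concave; the roof coordinates are invented:

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone-chain convex hull, counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                     # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):           # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]    # endpoints shared, drop duplicates

# Hypothetical XY coordinates of LiDAR returns classified as one roof.
roof = [(0, 0), (4, 0), (4, 3), (0, 3), (1, 1), (2, 2), (3, 1)]
print(convex_hull(roof))
```

The area of the resulting boundary polygon can then be accumulated over all buildings to estimate the built-up area.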


Nowadays CPV trends, mostly based on lens-parquet flat modules, enable the separate design of the sun tracker. To enable this possibility, a set of specifications has to be prescribed for the tracker design team, taking into account fundamental requirements such as the maximum service loads, both permanent and variable, the sun-tracking accuracy, and the tracker structural stiffness required to keep the CPV array's acceptance-angle loss below a certain threshold. In its first part, this paper outlines the authors' approach to confronting these issues. Next, a method is introduced to estimate the acceptance-angle losses due to the tracker's structural flexure, which ultimately relies on computing the minimum enclosing circle of a set of points in the plane. This method is also useful for simulating the drifts in the tracker's pointing vector due to structural deformation as a function of the aperture orientation angle. Results of this method applied to the design of a two-axis CPV pedestal tracker are presented.
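The minimum-enclosing-circle computation the method relies on can be sketched with a brute-force approach, adequate for the handful of deflected pointing vectors involved (Welzl's randomized algorithm would be the efficient alternative); the data points below are hypothetical:

```python
import itertools, math

def circle_two(a, b):
    """Circle with segment ab as diameter."""
    cx, cy = (a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0
    return (cx, cy, math.dist(a, b) / 2.0)

def circle_three(a, b, c):
    """Circumcircle of triangle abc, or None if the points are collinear."""
    d = 2.0 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    if abs(d) < 1e-12:
        return None
    ux = ((a[0]**2 + a[1]**2) * (b[1] - c[1]) + (b[0]**2 + b[1]**2) * (c[1] - a[1])
          + (c[0]**2 + c[1]**2) * (a[1] - b[1])) / d
    uy = ((a[0]**2 + a[1]**2) * (c[0] - b[0]) + (b[0]**2 + b[1]**2) * (a[0] - c[0])
          + (c[0]**2 + c[1]**2) * (b[0] - a[0])) / d
    return (ux, uy, math.dist((ux, uy), a))

def min_enclosing_circle(points, eps=1e-9):
    """Brute force: try every pair (as diameter) and every triple
    (circumcircle); keep the smallest circle containing all points."""
    candidates = [circle_two(a, b) for a, b in itertools.combinations(points, 2)]
    candidates += [c for t in itertools.combinations(points, 3)
                   if (c := circle_three(*t)) is not None]
    best = None
    for cx, cy, r in candidates:
        if all(math.dist((cx, cy), p) <= r + eps for p in points):
            if best is None or r < best[2]:
                best = (cx, cy, r)
    return best

# Hypothetical deflected pointing directions projected onto a plane (mrad).
pts = [(0.0, 0.0), (2.0, 0.0), (1.0, 0.5), (1.0, -0.3)]
cx, cy, r = min_enclosing_circle(pts)
print(round(r, 3))   # the radius bounds the pointing drift of the array
```

The enclosing radius gives a single scalar bound on how much of the modules' acceptance angle is consumed by structural flexure at a given aperture orientation.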