27 results for Coherent Pixels Technique
Abstract:
This paper presents the implementation details of a coded structured light system for rapid shape acquisition of unknown surfaces. Such techniques are based on projecting patterns onto the surface being measured and grabbing images of every projection with a camera. By analyzing the pattern deformations that appear in the images, 3D information about the surface can be calculated. The implemented technique projects a single pattern, so it can be used to measure moving surfaces. The pattern is structured as a grid whose slit colors are selected using a De Bruijn sequence. Moreover, since both axes of the pattern are coded, the cross points of the grid carry two codewords (which permits reconstructing them very precisely), while pixels belonging to horizontal and vertical slits also carry a codeword. Different sets of colors are used for horizontal and vertical slits, so the resulting pattern is invariant to rotation. Therefore, the alignment constraint between camera and projector assumed by many authors is not necessary.
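As an illustration of the coding principle (not the authors' implementation), a De Bruijn sequence B(k, n) guarantees that every window of n consecutive symbols is unique, so the color neighborhood of a slit identifies its position in the pattern. A minimal sketch using the standard Lyndon-word concatenation algorithm, assuming k colors and window length n:

```python
def de_bruijn(k, n):
    """Generate a De Bruijn sequence B(k, n) over the alphabet {0..k-1}
    by concatenating Lyndon words whose lengths divide n."""
    a = [0] * k * n
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence

# With 3 slit colors and windows of 2, the cyclic sequence has length 9
# and every pair of adjacent symbols occurs exactly once.
colors = de_bruijn(3, 2)
```

Mapping each symbol to a slit color then makes any run of n adjacent slits a unique codeword along that axis.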
Abstract:
This paper shows the numerous problems of conventional economic analysis in the evaluation of climate change mitigation policies. The article points out the many limitations, omissions, and the arbitrariness that have characterized most evaluation models applied up until now. These shortcomings have, in an almost overwhelming way, biased results toward recommending less aggressive emission mitigation policies. Consequently, this paper questions whether these results provide an appropriate answer to the problem. Finally, various points that an analysis coherent with sustainable development should take into account are presented.
Abstract:
Graph pebbling is a network model for studying whether a given supply of discrete pebbles can satisfy a given demand via pebbling moves. A pebbling move across an edge of a graph takes two pebbles from one endpoint and places one pebble at the other endpoint; the other pebble is lost in transit as a toll. It has been shown that deciding whether a supply can meet a demand on a graph is NP-complete. The pebbling number of a graph is the smallest t such that every supply of t pebbles can satisfy every demand of one pebble. Deciding whether the pebbling number is at most k is Π₂^P-complete. In this paper we develop a tool, called the Weight Function Lemma, for computing upper bounds and sometimes exact values for pebbling numbers with the assistance of linear optimization. With this tool we are able to calculate the pebbling numbers of much larger graphs than was possible with previous algorithms, and much more quickly as well. We also obtain results for many families of graphs, in many cases by hand, with much simpler and remarkably shorter proofs than in previously existing arguments (certificates typically of size at most the number of vertices times the maximum degree), especially for highly symmetric graphs. Here we apply the Weight Function Lemma to several specific graphs, including the Petersen, Lemke, 4th weak Bruhat, Lemke squared, and two random graphs, as well as to a number of infinite families of graphs, such as trees, cycles, graph powers of cycles, cubes, and some generalized Petersen and Coxeter graphs. This partly answers a question of Pachter et al. by computing the pebbling exponent of cycles to within an asymptotically small range. It is conceivable that this method yields an approximation algorithm for graph pebbling.
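To make the model concrete, a single pebbling move can be sketched as a simple operation on a pebble distribution (a hypothetical minimal implementation for illustration, not the linear-optimization tool described in the abstract):

```python
def pebbling_move(dist, u, v, edges):
    """Apply one pebbling move across edge (u, v): remove two pebbles
    from u and place one on v; the other pebble is lost as a toll.

    dist  -- dict mapping vertex -> pebble count
    edges -- set of undirected edges given as (vertex, vertex) tuples
    """
    if (u, v) not in edges and (v, u) not in edges:
        raise ValueError("u and v are not adjacent")
    if dist.get(u, 0) < 2:
        raise ValueError("a pebbling move needs two pebbles at the source")
    new_dist = dict(dist)
    new_dist[u] -= 2
    new_dist[v] = new_dist.get(v, 0) + 1
    return new_dist

# On a path 0 - 1 - 2, four pebbles on vertex 0 suffice to reach vertex 2:
edges = {(0, 1), (1, 2)}
d = pebbling_move({0: 4}, 0, 1, edges)   # {0: 2, 1: 1}
d = pebbling_move(d, 0, 1, edges)        # {0: 0, 1: 2}
d = pebbling_move(d, 1, 2, edges)        # one pebble arrives at vertex 2
```

This also illustrates why the pebbling number of a path of length n is 2^n: each edge traversal halves the supply.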
Abstract:
Planners in public and private institutions would like coherent forecasts of the components of age-specific mortality, such as causes of death. This has been difficult to achieve because the relative values of the forecast components often fail to behave in a way that is coherent with historical experience. In addition, when the group forecasts are combined, the result is often incompatible with an all-groups forecast. It has been shown that cause-specific mortality forecasts are pessimistic when compared with all-cause forecasts (Wilmoth, 1995). This paper abandons the conventional approach of using log mortality rates and forecasts the density of deaths in the life table. Since these values obey a unit-sum constraint for both conventional single-decrement life tables (only one absorbing state) and multiple-decrement tables (more than one absorbing state), they are intrinsically relative rather than absolute values across decrements as well as ages. Using the methods of Compositional Data Analysis pioneered by Aitchison (1986), death densities are transformed into real space so that the full range of multivariate statistics can be applied, then back-transformed to positive values so that the unit-sum constraint is honoured. The structure of the best-known single-decrement mortality-rate forecasting model, devised by Lee and Carter (1992), is expressed in compositional form and the results from the two models are compared. The compositional model is extended to a multiple-decrement form and used to forecast mortality by cause of death for Japan.
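The transform-and-back-transform step described above can be illustrated with the centered log-ratio (clr) transform, one of the Aitchison-style maps between the unit simplex and real space (a sketch of the general idea, not the paper's exact model):

```python
import math

def clr(x):
    """Centered log-ratio transform: map a positive composition
    into real space (clr coordinates sum to zero)."""
    g = math.exp(sum(math.log(v) for v in x) / len(x))  # geometric mean
    return [math.log(v / g) for v in x]

def clr_inverse(y):
    """Back-transform to positive values, re-closing to the unit sum."""
    e = [math.exp(v) for v in y]
    s = sum(e)
    return [v / s for v in e]

# A density of deaths across three hypothetical causes, summing to one:
deaths = [0.2, 0.3, 0.5]
coords = clr(deaths)           # unconstrained real values; statistics apply here
recovered = clr_inverse(coords)  # positive values honouring the unit-sum constraint
```

Forecasting is done on the unconstrained clr coordinates; the back-transform guarantees the forecast densities remain positive and sum to one.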
Abstract:
In the present work, microstructure improvement using FSP (Friction Stir Processing) is studied. In the first part of the work, the microstructure improvement of as-cast A356 is demonstrated. Tensile tests were applied to check the increase in ductility; however, the expected results could not be achieved. In the second part, the microstructure improvement of a fusion weld in 1050 aluminium alloy is presented. Hardness tests were carried out to prove the mechanical property improvements. In the third and last part, the microstructure improvement of 1050 aluminium alloy is achieved. A discussion of the mechanical property improvements induced by FSP is given. The influence of tool traverse speed on microstructure and mechanical properties is also discussed. Hardness tests and recrystallization theory enabled us to determine this influence.
Abstract:
Subcompositional coherence is a fundamental property of Aitchison's approach to compositional data analysis, and is the principal justification for using ratios of components. We maintain, however, that lack of subcompositional coherence, that is, incoherence, can be measured in an attempt to evaluate whether any given technique is close enough, for all practical purposes, to being subcompositionally coherent. This opens up the field to alternative methods, which might be better suited to cope with problems such as data zeros and outliers, while being only slightly incoherent. The measure that we propose is based on the distance measure between components. We show that two-part subcompositions, which appear to be the most sensitive to subcompositional incoherence, can be used to establish a distance matrix that can be directly compared with the pairwise distances in the full composition. The closeness of these two matrices can be quantified using a stress measure that is common in multidimensional scaling, providing a measure of subcompositional incoherence. The approach is illustrated using power-transformed correspondence analysis, which has already been shown to converge to log-ratio analysis as the power transform tends to zero.
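The comparison of the two distance matrices can be sketched with a Kruskal-type normalized stress, as used in multidimensional scaling (an illustrative formula, assuming symmetric matrices with zero diagonal; not the paper's exact definition):

```python
import math

def stress(d_full, d_sub):
    """Normalized stress between two pairwise-distance matrices:
    sqrt( sum (d_ij - d'_ij)^2 / sum d_ij^2 ) over the upper triangle.
    Zero means the two sets of distances agree exactly."""
    n = len(d_full)
    num = den = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            num += (d_full[i][j] - d_sub[i][j]) ** 2
            den += d_full[i][j] ** 2
    return math.sqrt(num / den)
```

A perfectly subcompositionally coherent technique would give stress 0 between full-composition distances and those recomputed from subcompositions; small positive values quantify "slight incoherence".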
Abstract:
Weather radar observations are currently the most reliable method for remote sensing of precipitation. However, a number of factors affect the quality of radar observations and may seriously limit automated quantitative applications of radar precipitation estimates, such as those required in Numerical Weather Prediction (NWP) data assimilation or in hydrological models. In this paper, a technique to correct two different problems typically present in radar data is presented and evaluated. The aspects dealt with are non-precipitating echoes - caused either by permanent ground clutter or by anomalous propagation of the radar beam (anaprop echoes) - and topographical beam blockage. The correction technique is based on the computation of realistic beam propagation trajectories from recent radiosonde observations instead of assuming standard radio propagation conditions. The correction consists of three different steps: 1) calculation of a Dynamic Elevation Map, which provides the minimum clutter-free antenna elevation for each pixel within the radar coverage; 2) correction for residual anaprop, checking the vertical reflectivity gradients within the radar volume; and 3) topographical beam blockage estimation and correction using a geometric optics approach. The technique is evaluated with four case studies in the region of the Po Valley (N Italy) using a C-band Doppler radar and a network of raingauges providing hourly precipitation measurements. The case studies cover different seasons, different radio propagation conditions, and both stratiform and convective precipitation events. After applying the proposed correction, a comparison of the radar precipitation estimates with raingauges indicates a general reduction in both the root mean squared error and the fractional error variance, indicating the efficiency and robustness of the procedure. Moreover, the technique is not computationally expensive, so it seems well suited for implementation in an operational environment.
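For contrast with the radiosonde-based trajectories used here, standard radio propagation conditions are conventionally modeled with the effective Earth-radius (k = 4/3) approximation for beam height. A sketch of that conventional formula, which this technique replaces (the antenna height parameter and units are assumptions for illustration):

```python
import math

def beam_height(r_km, elev_deg, antenna_km=0.0, k=4.0 / 3.0):
    """Height (km, above the antenna level plus antenna_km) of the beam
    center at slant range r_km for elevation elev_deg, using the
    k * Earth-radius model for standard refraction."""
    re = 6371.0 * k  # effective Earth radius in km
    theta = math.radians(elev_deg)
    return math.sqrt(r_km ** 2 + re ** 2 + 2 * r_km * re * math.sin(theta)) - re + antenna_km
```

Under anomalous propagation the true trajectory departs from this curve, which is why the paper recomputes trajectories from radiosonde profiles instead.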
Abstract:
The aim of this paper is to quantitatively characterize the climatology of daily precipitation indices in Catalonia (northeastern Iberian Peninsula) from 1951 to 2003. This work has been performed by analyzing a subset of the ETCCDI (Expert Team on Climate Change Detection and Indices) precipitation indices calculated from a new interpolated dataset of daily precipitation, namely SPAIN02, regular at 0.2° horizontal resolution (around 20 km), and from two high-quality stations: the Ebro and Fabra observatories. Using a jack-knife technique, we have found that the sampling error of the SPAIN02 regional average is relatively low. The trend analysis has been implemented using a Circular Block Bootstrap procedure applicable to non-normal distributions and autocorrelated series. A running trend analysis has been applied to analyze trend persistence. No general trends at a regional scale are observed, considering the annual or the seasonal regionally averaged series of all the indices for all the time windows considered. Only the consecutive dry days index (CDD) at the annual scale shows a locally coherent spatial trend pattern; around 30% of the area of Catalonia has experienced an increase of around 2-3 days decade⁻¹. The Ebro and Fabra observatories show a similar CDD trend, mainly due to the summer contribution. In addition, a significant decrease in total precipitation (around −10 mm decade⁻¹) and in the index "highest precipitation amount in a five-day period" (RX5DAY, around −5 mm decade⁻¹) has been found in summer for the Ebro observatory.
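The CDD index discussed above can be sketched as the longest run of days below a dryness threshold (the 1 mm threshold used here is the common ETCCDI convention, stated as an assumption):

```python
def consecutive_dry_days(precip_mm, threshold=1.0):
    """ETCCDI-style CDD: length of the longest run of days with
    daily precipitation below `threshold` (mm)."""
    longest = current = 0
    for p in precip_mm:
        if p < threshold:
            current += 1
            longest = max(longest, current)
        else:
            current = 0
    return longest

# Seven days of daily totals (mm): the longest dry spell is 3 days.
cdd = consecutive_dry_days([0.0, 0.2, 5.0, 0.0, 0.0, 0.9, 2.0])
```

Applied per year (or per season) at each grid point, this yields the series whose trends are tested above.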
Abstract:
Avalanche photodiodes operated in the Geiger mode offer high intrinsic gain as well as excellent timing accuracy. These qualities make the sensor especially suitable for applications where detectors with high sensitivity and low timing uncertainty are required. Moreover, they are compatible with standard CMOS technologies, allowing sensor and front-end electronics integration within the pixel cell. However, the sensor suffers from high levels of intrinsic noise, which may lead to erroneous results and limit the range of detectable signals. Noise counts also increase the amount of data that has to be stored. In this work, we present a pixel based on a Geiger-mode avalanche photodiode operated in the gated mode to reduce the probability of detecting noise counts that interfere with photon arrival events. The readout circuit is based on a two-ground scheme to enable low reverse-bias overvoltages and consequently lessen the dark count rate. Experimental characterization of the pixel, fabricated in the HV-AMS 0.35 µm standard technology, is also presented in this article.
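The benefit of gating can be illustrated with simple Poisson statistics: if dark counts arrive at a rate DCR, the probability of at least one dark count falling inside a gate of width t is 1 − exp(−DCR·t), so narrowing the gate directly lowers the chance of a noise event (an illustrative calculation with assumed numbers, not data from the paper):

```python
import math

def dark_count_probability(dcr_hz, gate_s):
    """Probability of at least one dark count within a gate window,
    assuming dark counts form a Poisson process at rate dcr_hz."""
    return 1.0 - math.exp(-dcr_hz * gate_s)

# Hypothetical example: at 1 kHz dark count rate, a 1 us gate is
# an order of magnitude less likely to capture a dark count than a 10 us gate.
p_short = dark_count_probability(1e3, 1e-6)
p_long = dark_count_probability(1e3, 1e-5)
```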
Abstract:
The scaling up of the Hot Wire Chemical Vapor Deposition (HW-CVD) technique to large deposition areas can be achieved using a catalytic net of equally spaced parallel filaments. The large-area deposition limit is defined as the point beyond which a further increase of the catalytic net area does not affect the properties of the deposited film. This is the case when a dense catalytic net is spread over a surface considerably larger than that of the film substrate. To study this limit, a system able to hold a net of twelve wires covering a surface of about 20 cm x 20 cm was used to deposit amorphous (a-Si:H) and microcrystalline (μc-Si:H) silicon over a substrate of 10 cm x 10 cm placed at a filament-substrate distance ranging from 1 to 2 cm. The uniformity of the film thickness d and of the optical constants n(x, λ) and α(x, ħω) was studied via transmission measurements. The thin-film uniformity as a function of the filament-substrate distance was studied, and the experimental thickness profile was compared with the theoretical result obtained by solving the diffusion equations. Optimization of the filament-substrate distance yielded films with inhomogeneities lower than ±2.5% and deposition rates higher than 1 nm/s and 4.5 nm/s for μc-Si:H and a-Si:H, respectively.
Abstract:
The formation of coherently strained three-dimensional (3D) islands on top of the wetting layer in the Stranski-Krastanov mode of growth is considered in a model in 1 + 1 dimensions accounting for the anharmonicity and nonconvexity of the real interatomic forces. It is shown that coherent 3D islands can be expected to form in compressed rather than expanded overlayers beyond a critical lattice misfit. In expanded overlayers the classical Stranski-Krastanov growth is expected to occur because the misfit dislocations can become energetically favored at smaller island sizes. The thermodynamic reason for coherent 3D islanding is incomplete wetting owing to the weaker adhesion of the edge atoms. Monolayer height islands with a critical size appear as necessary precursors of the 3D islands. This explains the experimentally observed narrow size distribution of the 3D islands. The 2D-3D transformation takes place by consecutive rearrangements of mono- to bilayer, bi- to trilayer islands, etc., after the corresponding critical sizes have been exceeded. The rearrangements are initiated by nucleation events, each one needing to overcome a lower energetic barrier than the one before. The model is in good qualitative agreement with available experimental observations.
Abstract:
We consider Brownian motion on a line terminated by two trapping points. A bias term in the form of a telegraph signal is applied to this system. It is shown that the first two moments of survival time exhibit a minimum at the same resonant frequency.
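A Monte Carlo sketch of this setup: discretized Brownian motion between two absorbing points, with a drift whose sign flips periodically like a telegraph signal. All parameter values (domain, noise strength, bias amplitude) are assumptions for illustration, not the paper's model parameters:

```python
import math
import random

def survival_time(freq, dt=1e-3, sigma=1.0, bias=0.5,
                  x0=0.5, traps=(0.0, 1.0), rng=random):
    """Simulate biased Brownian motion until absorption at either trap.
    The drift sign flips every half period 1/(2*freq) (telegraph signal)."""
    x, t = x0, 0.0
    half_period = 1.0 / (2.0 * freq)
    while traps[0] < x < traps[1]:
        sign = 1.0 if (t // half_period) % 2 == 0 else -1.0
        x += sign * bias * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return t

def survival_moments(freq, n=2000, seed=0):
    """Estimate the first two moments of the survival time at a given
    telegraph frequency; sweeping freq would locate the resonant minimum."""
    rng = random.Random(seed)
    times = [survival_time(freq, rng=rng) for _ in range(n)]
    m1 = sum(times) / n
    m2 = sum(s * s for s in times) / n
    return m1, m2
```

Evaluating `survival_moments` over a range of frequencies would reproduce, qualitatively, the kind of resonance curve the abstract describes.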
Abstract:
The objective of this study was to assess the applicability of posterior wall repair with a synthetic absorbable mesh. Between January and September 1996, five posterior repairs using absorbable synthetic meshes were performed. Five posterior wall repairs in patients matched for age, parity, and rectocele degree were performed according to usual procedures during the same period and used as controls. No febrile morbidity, cuff or posterior vaginal wall infections, thrombophlebitis, rectal injury, or hemorrhagic complications were observed in the 10 women who entered the study. In summary, posterior wall repair can be easily performed with an absorbable soft tissue patch, theoretically preserving sexual activity, and with longer experience will probably offer better functional results, making it a safe and useful procedure for sexually active women.
Abstract:
The observation of coherent tunnelling in Cu2+- and Ag2+-doped MgO and in CaO:Cu2+ was a crucial discovery in the realm of the Jahn-Teller (JT) effect. The main reasons favoring this dynamic behavior are now clarified through ab initio calculations on Cu2+- and Ag2+-doped cubic oxides. Small JT distortions and an unexpectedly low anharmonicity of the e_g JT mode are behind energy barriers smaller than 25 cm⁻¹ derived through CASPT2 calculations for Cu2+- and Ag2+-doped MgO and CaO:Cu2+. The low anharmonicity is shown to come from a strong vibrational coupling of MO6^10− units (M = Cu, Ag) to the host lattice. The average distance between the d9 impurity and its ligands is found to vary significantly on passing from MgO to SrO, following to a good extent the lattice parameter.