24 results for transmission of data and images
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
In this paper, an advanced technique for the generation of deformation maps using synthetic aperture radar (SAR) data is presented. The algorithm estimates the linear and nonlinear components of the displacement, the error of the digital elevation model (DEM) used to cancel the topographic terms, and the atmospheric artifacts from a reduced set of low spatial resolution interferograms. The pixel candidates are selected from those presenting a good coherence level in the whole set of interferograms, and the resulting nonuniform mesh is tessellated with the Delaunay triangulation to establish connections among them. The linear component of movement and the DEM error are estimated by adjusting a linear model to the data on the connections only. Later on, this information, once unwrapped to retrieve the absolute values, is used to calculate the nonlinear component of movement and the atmospheric artifacts with alternate filtering techniques in both the temporal and spatial domains. The method presents high flexibility with respect to the required number of images and the baseline lengths. However, better results are obtained with large datasets of short-baseline interferograms. The technique has been tested with European Remote Sensing (ERS) SAR data from an area of Catalonia (Spain) and validated with precise on-field leveling measurements.
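The arc-wise linear adjustment described in this abstract can be sketched in a few lines: on each mesh connection, the interferometric phase increment is modeled as linear in the temporal baseline (velocity term) and the perpendicular baseline (DEM-error term), and the two unknowns are recovered by least squares. The wavelength, geometry, baselines, and noise level below are illustrative assumptions, not the values of the actual processing chain.

```python
import numpy as np

# Illustrative sketch of the linear model fitted on a single connection.
wavelength = 0.056                  # ERS C-band wavelength [m] (assumed)
r, theta = 850e3, np.deg2rad(23)    # slant range [m], incidence angle (assumed)

rng = np.random.default_rng(1)
K = 20                              # number of interferograms
T = rng.uniform(-2, 2, K)           # temporal baselines [years]
B = rng.uniform(-150, 150, K)       # perpendicular baselines [m]

true_dv, true_de = 0.004, 3.0       # velocity increment [m/yr], DEM error [m]
phase = (4 * np.pi / wavelength) * (T * true_dv + B * true_de / (r * np.sin(theta)))
phase += rng.normal(0, 0.1, K)      # additive phase noise [rad]

# Design matrix of the linear model; adjust by least squares on the arc.
A = (4 * np.pi / wavelength) * np.column_stack([T, B / (r * np.sin(theta))])
(dv_est, de_est), *_ = np.linalg.lstsq(A, phase, rcond=None)
```

In the full algorithm this adjustment runs on every Delaunay connection, and the arc increments are later integrated and unwrapped to absolute values.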
Abstract:
Many studies today stress the need to offer methodological and psychological support to learners who work autonomously. The aim of this support is to help them develop the skills they need to direct their own learning, as well as a positive attitude towards and a greater awareness of that learning. In short, these two types of preparation are considered essential to help learners become more autonomous and more efficient in their own learning. However, while it is common to find studies that exemplify applications of methodological support within their programmes, mainly in strategy training or in helping learners to develop a work plan, this is not the case for psychological preparation. Only with rare exceptions do we find studies documenting how learners' attitudes and beliefs, also known as metacognitive knowledge (MK), are addressed in programmes that foster learner autonomy. The objectives of this paper are twofold: a) to offer a review of studies that have used different means to influence learners' MK, and b) to describe the weaknesses and advantages of the procedures and instruments they use, as assessed in research studies, since this will allow us to establish objective criteria on how and when to use them in programmes that foster self-directed learning.
Abstract:
Report for the scientific sojourn carried out at the University of New South Wales from February to June 2007. Two different biogeochemical models are coupled to a three-dimensional configuration of the Princeton Ocean Model (POM) for the Northwestern Mediterranean Sea (Ahumada and Cruzado, 2007). The first biogeochemical model (BLANES) is the three-dimensional version of the model described by Bahamon and Cruzado (2003) and computes the nitrogen fluxes through six compartments using semi-empirical descriptions of biological processes. The second biogeochemical model (BIOMEC) is the biomechanical NPZD model described in Baird et al. (2004), which uses a combination of physiological and physical descriptions to quantify the rates of planktonic interactions. Physical descriptions include, for example, the diffusion of nutrients to phytoplankton cells and the encounter rate of predators and prey. The link between physical and biogeochemical processes in both models is expressed by the advection-diffusion of the non-conservative tracers. The similarities in the mathematical formulation of the biogeochemical processes in the two models are exploited to determine the parameter set for the biomechanical model that best fits the parameter set used in the first model. Three years of integration have been carried out for each model to reach the so-called perpetual-year run for biogeochemical conditions. Outputs from both models are averaged monthly and then compared to remote sensing chlorophyll images obtained from the MERIS sensor.
Abstract:
Research carried out in several Anglo-Saxon countries shows that many undergraduates identify oral sex and anal sex as examples of abstinent behaviour, while many others consider kissing and masturbation as examples of having sex. The objective of this research was to investigate whether a sample of Spanish students gave similar replies. Seven hundred and fifty undergraduates (92% aged under 26, 67.6% women) produced examples or definitions of the term 'abstinence'. Spanish students made similar errors to those observed in the Anglo-Saxon samples, in that behaviours that were abstinent from a preventive point of view (masturbation and sex without penetration) were not considered as such, while a number of students reported oral sex as abstinent behaviour. The results suggest that information on risky and preventive sexual behaviour should cease to use ambiguous or euphemistic expressions and instead use vocabulary that is clear and comprehensible to everyone.
Abstract:
The spectral efficiency achievable with joint processing of pilot and data symbol observations is compared with that achievable through the conventional (separate) approach of first estimating the channel on the basis of the pilot symbols alone, and subsequently detecting the data symbols. Studied on the basis of a mutual information lower bound, joint processing is found to provide a non-negligible advantage relative to separate processing, particularly for fast fading. It is shown that, regardless of the fading rate, only a very small number of pilot symbols (at most one per transmit antenna and per channel coherence interval) should be transmitted if joint processing is allowed.
Abstract:
When continuous data are coded to categorical variables, two types of coding are possible: crisp coding in the form of indicator, or dummy, variables with values either 0 or 1; or fuzzy coding, where each observation is transformed to a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix - the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories with the highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resultant orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix.
In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded crisply and fuzzily into three categories. Measuring the fit is further discussed when the data set consists of a mixture of discrete and continuous variables.
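The crisp/fuzzy contrast for a single continuous variable coded into three categories can be sketched with triangular membership functions. The knot values below are invented for illustration; they are not the ones used for the meteorological data in the paper.

```python
import numpy as np

def fuzzy_code(x, low, mid, high):
    """Degrees of membership in 3 categories (rows sum to 1),
    using triangular membership functions with the given knots."""
    x = np.asarray(x, dtype=float)
    m = np.zeros(x.shape + (3,))
    m[x <= low, 0] = 1.0
    m[x >= high, 2] = 1.0
    left = (x > low) & (x <= mid)
    m[left, 0] = (mid - x[left]) / (mid - low)
    m[left, 1] = 1 - m[left, 0]
    right = (x > mid) & (x < high)
    m[right, 1] = (high - x[right]) / (high - mid)
    m[right, 2] = 1 - m[right, 1]
    return m

def crisp_code(x, low, mid, high):
    """Indicator (dummy) coding: 1 in the category of highest membership."""
    m = fuzzy_code(x, low, mid, high)
    out = np.zeros_like(m)
    out[np.arange(len(m)), m.argmax(axis=1)] = 1.0
    return out

temps = np.array([2.0, 10.0, 14.0, 25.0])   # e.g. temperatures, knots assumed
fuzzy = fuzzy_code(temps, 5.0, 15.0, 25.0)
crisp = crisp_code(temps, 5.0, 15.0, 25.0)
```

Defuzzification then inverts the membership functions to estimate the original values, which is what makes the percentage-of-variance measure of fit available in the fuzzy case.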
Abstract:
Dual scaling of a subjects-by-objects table of dominance data (preferences, paired comparisons and successive categories data) has been contrasted with correspondence analysis, as if the two techniques were somehow different. In this note we show that dual scaling of dominance data is equivalent to the correspondence analysis of a table which is doubled with respect to subjects. We also show that the results of both methods can be recovered from a principal components analysis of the undoubled dominance table which is centred with respect to subject means.
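The doubling construction referred to in this note can be sketched in a few lines: each object receives a "positive" and a "negative" pole column so that the two poles of every cell sum to a constant. The rank table below is invented for illustration.

```python
import numpy as np

# 4 subjects rank 3 objects; rank 0 = most preferred (illustrative data).
R = np.array([[0, 2, 1],
              [1, 0, 2],
              [2, 1, 0],
              [0, 1, 2]])
m = R.shape[1]

neg = R                              # dominance "against" each object
pos = (m - 1) - R                    # dominance "for" each object
doubled = np.hstack([pos, neg])      # the table analysed by CA

# Each row sums to m*(m-1), so CA assigns uniform masses to subjects --
# the key fact behind the equivalence with subject-centred PCA.
row_sums = doubled.sum(axis=1)
```

With uniform row masses, the CA of `doubled` and the PCA of the subject-centred table `R - R.mean(axis=1, keepdims=True)` yield the same configuration up to scaling.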
Abstract:
We combine existing balance sheet and stock market data with two new datasets to study whether, how much, and why bank lending to firms matters for the transmission of monetary policy. The first new dataset enables us to quantify the bank dependence of firms precisely, as the ratio of bank debt to total assets. We show that a two standard deviation increase in the bank dependence of a firm makes its stock price about 25% more responsive to monetary policy shocks. We explore the channels through which this effect occurs, and find that the stock prices of bank-dependent firms that borrow from financially weaker banks display a stronger sensitivity to monetary policy shocks. This finding is consistent with the bank lending channel, a theory according to which the strength of bank balance sheets matters for monetary policy transmission. We construct a new database of hedging activities and show that the stock prices of bank-dependent firms that hedge against interest rate risk display a lower sensitivity to monetary policy shocks. This finding is consistent with an interest rate pass-through channel that operates via the direct transmission of policy rates to lending rates, associated with the widespread use of floating rates in bank loans and credit line agreements.
Abstract:
This paper describes the development and applications of a super-resolution method known as Super-Resolution Variable-Pixel Linear Reconstruction. The algorithm works by combining different lower-resolution images in order to obtain, as a result, a higher-resolution image. We show that it can make significant spatial resolution improvements to satellite images of the Earth's surface, allowing recognition of objects with size approaching the limiting spatial resolution of the lower-resolution images. The algorithm is based on the Variable-Pixel Linear Reconstruction algorithm developed by Fruchter and Hook, a well-known method in astronomy but never used for Earth remote sensing purposes. The algorithm preserves photometry, can weight input images according to the statistical significance of each pixel, and removes the effect of geometric distortion on both image shape and photometry. In this paper, we describe its development for remote sensing purposes, show the usefulness of the algorithm when working with images as different from astronomical images as remote sensing ones, and show applications to: 1) a set of simulated multispectral images obtained from a real QuickBird image; and 2) a set of real multispectral Landsat Enhanced Thematic Mapper Plus (ETM+) images. These examples show that the algorithm provides a substantial improvement in limiting spatial resolution for both simulated and real data sets without significantly altering the multispectral content of the input low-resolution images, without amplifying the noise, and with very few artifacts.
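The core idea of combining sub-pixel-shifted low-resolution samples onto a finer grid can be illustrated with a naive shift-and-add sketch. This is not the Fruchter-Hook variable-pixel ("drizzle") algorithm itself, which shrinks and reprojects pixel footprints with per-pixel weights; it only shows the underlying resampling step, with invented frames and shifts.

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Naive shift-and-add onto a grid `scale` times finer.

    frames: list of equal-size 2-D arrays; shifts: per-frame (dy, dx)
    offsets in low-res pixels. Returns the weighted-average image.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    wgt = np.zeros_like(acc)
    for f, (dy, dx) in zip(frames, shifts):
        for i in range(h):
            for j in range(w):
                # Drop each low-res sample onto its high-res position.
                y = int(round((i + dy) * scale))
                x = int(round((j + dx) * scale))
                if 0 <= y < acc.shape[0] and 0 <= x < acc.shape[1]:
                    acc[y, x] += f[i, j]
                    wgt[y, x] += 1.0
    return np.divide(acc, wgt, out=np.zeros_like(acc), where=wgt > 0)

# Four flat frames with half-pixel shifts tile the doubled grid exactly.
frames = [np.ones((4, 4))] * 4
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
sr = shift_and_add(frames, shifts, scale=2)
```

Drizzle improves on this by spreading each input pixel's flux over the output pixels it overlaps, which is what preserves photometry under geometric distortion.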
Abstract:
In this study, we present a detailed structural characterization, by means of transmission electron microscopy and Raman spectroscopy, of polymorphous silicon (pm-Si:H) thin films deposited using radio-frequency dust-forming plasmas of SiH4 diluted in Ar. The square-wave modulation of the plasma and the gas temperature were varied to obtain films with different nanostructures. Transmission electron microscopy and electron diffraction have shown the presence of Si crystallites of around 2 nm in the pm-Si:H films, which are related to the nanoparticles formed in the plasma gas phase at their different growth stages, namely particle nucleation and coagulation. Raman scattering has proved the role of the film nanostructure in the crystallization process induced in situ by laser heating.
Abstract:
Naive scale invariance is not a true property of natural images. Natural monochrome images possess a much richer geometrical structure, which is particularly well described in terms of multiscaling relations. This means that the pixels of a given image can be decomposed into sets, the fractal components of the image, with well-defined scaling exponents [Turiel and Parga, Neural Comput. 12, 763 (2000)]. Here it is shown that hyperspectral representations of natural scenes also exhibit multiscaling properties, observing the same kind of behavior. A precise measure of the informational relevance of the fractal components is also given, and it is shown that there are important differences between the intrinsically redundant red-green-blue system and the decorrelated one defined in Ruderman, Cronin, and Chiao [J. Opt. Soc. Am. A 15, 2036 (1998)].
Abstract:
This paper reviews the concept of presence in immersive virtual environments: the sense of being there, signalled by people acting and responding realistically to virtual situations and events. We argue that presence is a unique phenomenon that must be distinguished from the degree of engagement, i.e. involvement in the portrayed environment. We argue that there are three necessary conditions for presence: (a) a consistent low-latency sensorimotor loop between sensory data and proprioception; (b) statistical plausibility: images must be statistically plausible in relation to the probability distribution of images over natural scenes, a constraint on this plausibility being the level of immersion; (c) behaviour-response correlations: presence may be enhanced and maintained over time by appropriate correlations between the state and behaviour of participants and responses within the environment, correlations that show appropriate responses to the activity of the participants. We conclude with a discussion of methods for assessing whether presence occurs, and in particular recommend the approach of comparison with ground truth, giving some examples of this.
Abstract:
This article details the use of photographic rectification as support for the graphic documentation of historical and archaeological heritage, specifically the southern facade of the Torre del Pretori (Praetorium Tower) in Tarragona. The Praetorium Tower is part of a larger monumental complex and one of the towers that connected different parts of the Tarraco Provincial Forum, the political and administrative centre of the ancient capital of Hispania Citerior. It is therefore a valuable example of the evolution of Roman urban architecture. The aim of this project is to provide accurate graphic documentation of the structure to facilitate the restoration and conservation of the tower, as well as to provide a more profound architectural and archaeological understanding of the Roman forum. The use of photographic rectification enabled us to overcome the spatial and time constraints on data collection imposed by the size and location of the building. Specific software made it easier to obtain accurate two-dimensional images. For this reason, in our case, photographic rectification helped us to make a direct analysis of the monument and facilitated interpretation of the architectural stratigraphy. We currently separate the line of research into two strands: the construction processes and the architecture of the building. The documentation collected permitted various analyses: the characterisation of the building modules, identification of the tools used to work the building materials, etc. In conclusion, the use of orthoimages is a powerful tool that permits the systematic study of a Roman building that has evolved over the centuries and now stands in a modern urban context.
Abstract:
The objective of this paper is to examine whether informal labor markets affect the flows of Foreign Direct Investment (FDI), and also whether this effect is similar in developed and developing countries. To this end, different public data sources, such as the World Bank (WB) and the United Nations Conference on Trade and Development (UNCTAD), are used, and panel econometric models are estimated for a sample of 65 countries over a 14-year period (1996-2009). In addition, this paper uses a dynamic model as an extension of the analysis to establish whether such an effect exists and what its indicators and significance may be.