988 results for REAL-SPACE
Abstract:
We develop general extensions of the formalism for theories which preserve the relativity of inertial frames with a nonlinear action of the Lorentz transformations on momentum space. Relativistic particle models invariant under the corresponding deformed symmetries are presented, with particular emphasis on deformed dilatation transformations. The algebraic transformations relating the deformed symmetries with the usual (undeformed) ones are provided in order to preserve the Lorentz algebra. Two distinct cases are considered: a deformed dilatation transformation with a spacelike preferred direction and a very special relativity embedding with a lightlike preferred direction. In both analyses we consider the possibility of introducing quantum deformations of the corresponding symmetries such that the spacetime coordinates can be reconstructed and the particular form of the real space-momentum commutator remains covariant. Finally, feasible experiments in which the nonlinear Lorentz dilatation effects pointed out here may be detectable are suggested.
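In the deformed-special-relativity literature, a nonlinear action of the Lorentz group on momentum space is often described by conjugating the standard generators with an invertible map on momenta. As a hedged illustration only (the map U below is a generic placeholder, not the specific deformation or preferred-direction construction used in the paper), the deformed generators and the corresponding nonlinear transformation law can be sketched as

% Schematic: nonlinear Lorentz action on momentum space obtained by conjugation
% with an invertible map U (placeholder, not the paper's specific choice).
\[
  K_i^{\text{def}} \;=\; U^{-1}\, K_i\, U ,
  \qquad
  p'_\mu \;=\; U^{-1}\!\big[\Lambda_{\mu}^{\;\nu}\, U[p]_\nu\big] .
\]
% Because the deformed generators are related to the undeformed ones by a
% similarity transformation, they close the same (undeformed) Lorentz algebra:
\[
  [K_i^{\text{def}},\, K_j^{\text{def}}] \;=\; U^{-1}\,[K_i,\, K_j]\,U .
\]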
Abstract:
Condensation processes are of key importance in nature and play a fundamental role in chemistry and physics. Owing to size effects at the nanoscale, it is conceptually desirable to experimentally probe the dependence of condensate structure on the number of constituents one by one. Here we present an approach to study a condensation process atom-by-atom with the scanning tunnelling microscope, which provides direct real-space access with atomic precision to the aggregates formed in atomically defined 'quantum boxes'. Our analysis reveals the subtle interplay of competing directional and nondirectional interactions in the emergence of structure and provides unprecedented input for the structural comparison with quantum mechanical models. This approach focuses on, but is not limited to, the model case of xenon condensation and goes significantly beyond the well-established statistical size analysis of clusters in atomic or molecular beams by mass spectrometry.
Abstract:
This paper makes two points. First, we show that the line-of-sight solution to cosmic microwave anisotropies in Fourier space, even though formally defined for arbitrarily large wavelengths, leads to position-space solutions which only depend on the sources of anisotropies inside the past light cone of the observer. This foretold manifestation of causality in position (real) space happens order by order in a series expansion in powers of the visibility γ = e^(−μ), where μ is the optical depth to Thomson scattering. We show that the contributions of order γ^N to the cosmic microwave background (CMB) anisotropies are regulated by spacetime window functions which have support only inside the past light cone of the point of observation. Second, we show that the Fourier-Bessel expansion of the physical fields (including the temperature and polarization momenta) is an alternative to the usual Fourier basis as a framework to compute the anisotropies. The viability of the Fourier-Bessel series for treating the CMB is a consequence of the fact that the visibility function becomes exponentially small at redshifts z ≫ 10^3, effectively cutting off the past light cone and introducing a finite radius inside which initial conditions can affect physical observables measured at our position x⃗ = 0 and time t_0. Hence, for each multipole l there is a discrete tower of momenta k_{il} (not a continuum) which can affect physical observables, with the smallest momenta being k_{1l} ∼ l. The Fourier-Bessel modes take into account precisely the information from the sources of anisotropies that propagates from the initial value surface to the point of observation: no more, no less. We also show that the physical observables (the temperature and polarization maps), and hence the angular power spectra, are unaffected by that choice of basis. This implies that the Fourier-Bessel expansion is the optimal scheme with which one can compute CMB anisotropies.
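For concreteness, a minimal sketch of a Fourier-Bessel expansion of a scalar field inside a ball of radius R (the cut-off radius R and the boundary condition below are generic illustrative assumptions, not the paper's specific choices):

% Schematic Fourier-Bessel expansion: spherical Bessel functions j_l in the radial
% direction, spherical harmonics Y_lm in the angular directions, and a discrete
% tower of momenta k_{il} fixed by a boundary condition at r = R.
\[
  f(\vec{x}) \;=\; \sum_{l=0}^{\infty} \sum_{m=-l}^{l} \sum_{i=1}^{\infty}
  f_{ilm}\; j_l(k_{il}\, r)\; Y_{lm}(\hat{x}),
  \qquad
  j_l(k_{il} R) = 0 .
\]
% Since the first zero of j_l sits near its argument ~ l, the smallest allowed
% momentum for each multipole grows with l, consistent with k_{1l} ∼ l above.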
Abstract:
We present some exact results for the effect of disorder on the critical properties of an anisotropic XY spin chain in a transverse field. The continuum limit of the corresponding fermion model is taken and in various cases results in a Dirac equation with a random mass. Exact analytic techniques can then be used to evaluate the density of states and the localization length. In the presence of disorder the ferromagnetic-paramagnetic or Ising transition of the model is in the same universality class as the random transverse field Ising model solved by Fisher using a real-space renormalization-group decimation technique (RSRGDT). If there is only randomness in the anisotropy of the magnetic exchange then the anisotropy transition (from a ferromagnet in the x direction to a ferromagnet in the y direction) is also in this universality class. However, if there is randomness in the isotropic part of the exchange or in the transverse field then in a nonzero transverse field the anisotropy transition is destroyed by the disorder. We show that in the Griffiths phase near the Ising transition the ground-state energy has an essential singularity. The results obtained for the dynamical critical exponent, typical correlation length, and for the temperature dependence of the specific heat near the Ising transition agree with the results of the RSRGDT and numerical work. [S0163-1829(99)07125-8].
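For orientation, the real-space renormalization-group decimation referred to above (Fisher's strong-disorder RG for the random transverse-field Ising chain) proceeds, in its standard textbook form, by repeatedly eliminating the largest remaining coupling; the paper's specific conventions may differ, so the rules below are a hedged sketch only:

% Strong-disorder RG decimation rules for H = -\sum_i J_i \sigma^x_i \sigma^x_{i+1}
%                                             - \sum_i h_i \sigma^z_i .
% Let \Omega be the largest coupling (bond or field) left in the chain.
\[
  \text{if } \Omega = J_i:\qquad
  \tilde{h} \;\simeq\; \frac{h_i\, h_{i+1}}{J_i}
  \quad\text{(the two spins merge into a single cluster),}
\]
\[
  \text{if } \Omega = h_i:\qquad
  \tilde{J} \;\simeq\; \frac{J_{i-1}\, J_i}{h_i}
  \quad\text{(the site is decimated and its neighbours couple directly).}
\]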
Abstract:
The objective of the great investments in telecommunication networks is to bring economies closer together and put an end to asymmetries. The most isolated regions could be the beneficiaries of this new wave of technological investment disseminating through the territories. The new economic scenarios created by globalisation make high-capacity backbones and a coherent information society policy two instruments that could change regions' fate and launch them into a context of economic development. Technology could bring international projection to services or products and could be the differentiating element between a national and an international economic strategy. The networks and their flows are therefore becoming two of the most important variables for economies. Measuring and representing this new informational accessibility, mapping new communities, and finding new patterns and localisation models could be today's challenge. In physical, real space, location is defined by two or three geographical co-ordinates. In the network's virtual space, or cyberspace, geography seems incapable of defining location, because it lacks a good model. Trying to solve this problem, and based on geographical theories and concepts, new fields of study came to light: Internet Geography, Cybergeography and the Geography of Cyberspace are only three examples. In this paper, using Internet Geography and informational cartography, it was possible to observe and analyse the spatialisation of the Internet phenomenon through the distribution of IP addresses in the Portuguese territory. This work shows the great potential and applicability of this indicator for studies of Internet dissemination and regional development. The Portuguese territory is seen in a completely new form: the IP address distribution of Country Code Top Level Domains (.pt) could reveal new regional hierarchies. The spatial concentration or dispersion of top-level domains seems to be a good instrument for reflecting the info-structural dynamic and economic development of a territory, especially at the regional level.
Abstract:
Information Society plays an important role in all kinds of human activity, inducing new forms of economic and social organisation and creating knowledge. Over the last twenty years of the 20th century, large investments in telecommunication networks were made to bring economies closer together and put an end to asymmetries. The most isolated regions were the beneficiaries of this new wave of technological investment disseminating through the territories. The new economic scenarios created by globalisation make high-capacity backbones and a coherent information society policy two instruments that could change regions' fate and launch them into a context of economic development. Technology could bring international projection to services and products and could be the differentiating element between a national and an international economic strategy. The networks and their flows are therefore becoming two of the most important variables for economies. Measuring and representing this new informational accessibility, mapping new communities, and finding new patterns and localisation models could be today's challenge. In physical/real space, location is defined by two or three geographical co-ordinates. In the network/virtual space, or cyberspace, geography seems incapable of defining location, because it lacks a good model. Trying to solve this problem, and based on geographical theories and concepts, new fields of study came to light; Internet Geography is one example. In this paper, using Internet Geography and informational cartography, it was possible to observe and analyse the spatialisation of the Internet phenomenon through the distribution of IP addresses in the Portuguese territory. This work shows the great potential and applicability of this indicator for regional development studies. The IP address distribution of Country Code Top Level Domains (.pt for Portugal) could show the same economic patterns, reflecting territorial inflexibility or, by opposition, new regional hierarchies. The spatial concentration or dispersion of top-level domains seems to be a good instrument for analysing the info-structural dynamic and economic development of a territory, especially at the regional level. At the same time it shows that information technologies are essential to innovation and competitive advantage.
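One simple way to quantify the regional concentration of .pt domain registrations described in these two abstracts is a location quotient comparing each region's share of domains with its share of population. This is a generic illustration of the kind of indicator discussed, not necessarily the measure used in the papers, and the region figures below are invented.

# Hedged sketch: location quotient (LQ) of .pt domain registrations per region.
# LQ > 1: the region concentrates more domains than its population share suggests;
# LQ < 1: relative dispersion. All counts are hypothetical, not from the study.

regions = {
    # region: (domain_count, population)
    "Lisboa":   (52_000, 2_800_000),
    "Norte":    (31_000, 3_600_000),
    "Centro":   (14_000, 2_200_000),
    "Alentejo": (3_000,    700_000),
}

total_domains = sum(d for d, _ in regions.values())
total_population = sum(p for _, p in regions.values())

for name, (domains, population) in regions.items():
    domain_share = domains / total_domains
    population_share = population / total_population
    lq = domain_share / population_share
    print(f"{name:10s} LQ = {lq:.2f}")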
Abstract:
This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used in cases where the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and in time. Most machine learning algorithms are universal and adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some of the widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and Support Vector Machines (SVM), are adapted to the problems of analysing and modelling geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, when the dimension of the space exceeds 5. Such features are usually generated, for example, from digital elevation models, remote sensing images, etc. An important extension of the models concerns taking real-space constraints into account, such as geomorphology, networks and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and model inputs. This problem is approached using different nonlinear feature selection/feature extraction tools. To demonstrate the application of machine learning algorithms, several interesting case studies are considered: digital soil mapping using SVM; automatic mapping of soil and water system pollution using ANN; natural hazards risk analysis (avalanches, landslides); and assessments of renewable resources (wind fields) with SVM and ANN models. The dimensionality of the spaces considered varies from 2 to more than 30. Figures 1, 2 and 3 demonstrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional models of geostatistics.
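As a hedged illustration of the kind of workflow described (the scikit-learn API and the synthetic data are assumptions made for the sketch, not the models or datasets used in the report), a support vector regression of an environmental variable on spatial coordinates plus an extra geo-feature might look like this:

# Minimal sketch: SVM regression (SVR) on geo-spatial data with scikit-learn.
# X holds (x, y) coordinates plus an additional geo-feature (e.g. elevation);
# y is the environmental variable to map. All data here is synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
coords = rng.uniform(0, 100, size=(n, 2))        # x, y coordinates
elevation = rng.uniform(200, 1500, size=(n, 1))  # one extra geo-feature
X = np.hstack([coords, elevation])
y = np.sin(coords[:, 0] / 15) + 0.001 * elevation[:, 0] + rng.normal(0, 0.1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scaling matters for RBF-kernel SVMs; C and epsilon would normally be tuned.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))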
Abstract:
Planners in public and private institutions would like coherent forecasts of the components of age-specific mortality, such as causes of death. This has been difficult to achieve because the relative values of the forecast components often fail to behave in a way that is coherent with historical experience. In addition, when the group forecasts are combined, the result is often incompatible with an all-groups forecast. It has been shown that cause-specific mortality forecasts are pessimistic when compared with all-cause forecasts (Wilmoth, 1995). This paper abandons the conventional approach of using log mortality rates and forecasts the density of deaths in the life table. Since these values obey a unit sum constraint for both conventional single-decrement life tables (only one absorbing state) and multiple-decrement tables (more than one absorbing state), they are intrinsically relative rather than absolute values across decrements as well as ages. Using the methods of Compositional Data Analysis pioneered by Aitchison (1986), death densities are transformed into real space so that the full range of multivariate statistics can be applied, then back-transformed to positive values so that the unit sum constraint is honoured. The structure of the best-known single-decrement mortality-rate forecasting model, devised by Lee and Carter (1992), is expressed in compositional form and the results from the two models are compared. The compositional model is extended to a multiple-decrement form and used to forecast mortality by cause of death for Japan.
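As a hedged illustration of the compositional transform alluded to (a centred log-ratio version; the paper's exact choice of log-ratio transform is not specified in this abstract, and the numbers below are invented), mapping a vector of death densities to real space and back could look like this:

# Minimal sketch: centred log-ratio (clr) transform of a composition and its inverse.
# The death densities sum to one (unit sum constraint); clr maps them to real space,
# where ordinary multivariate statistics apply, and the inverse restores the constraint.
import numpy as np

def clr(x):
    """Centred log-ratio transform of a strictly positive composition."""
    g = np.exp(np.mean(np.log(x)))      # geometric mean of the parts
    return np.log(x / g)

def clr_inverse(z):
    """Back-transform to positive values and re-close to unit sum."""
    w = np.exp(z)
    return w / w.sum()

# Invented example: deaths distributed over four causes.
densities = np.array([0.45, 0.30, 0.20, 0.05])
z = clr(densities)                      # real-space coordinates (sum to zero)
restored = clr_inverse(z)
print(z, restored)                      # restored matches the original densities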
Abstract:
Kriging is an interpolation technique whose optimality criteria are based on normality assumptions either for observed or for transformed data. This is the case of normal, lognormal and multigaussian kriging. When kriging is applied to transformed scores, optimality of the obtained estimators becomes a cumbersome concept: back-transformed optimal interpolations in transformed scores are not optimal in the original sample space, and vice versa. This lack of compatible criteria of optimality induces a variety of problems in both point and block estimates. For instance, lognormal kriging, widely used to interpolate positive variables, has no straightforward way to build consistent and optimal confidence intervals for estimates. These problems are ultimately linked to the assumed space structure of the data support: for instance, positive values, when modelled with lognormal distributions, are assumed to be embedded in the whole real space, with the usual real space structure and Lebesgue measure.
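For context, a commonly quoted form of the back-transform at the heart of this difficulty (the simple-kriging case; conventions vary, and this is standard background rather than a result of the paper): if Ŷ is the kriging estimate of Y = ln Z with kriging variance σ²_SK, the naive exponentiation exp(Ŷ) is biased, and the usual correction is

\[
  \hat{Z}(x_0) \;=\; \exp\!\Big(\hat{Y}_{SK}(x_0) + \tfrac{1}{2}\,\sigma^2_{SK}(x_0)\Big).
\]
% Even with this correction, optimality (e.g. minimum error variance) established
% for \hat{Y} in log space does not carry over to \hat{Z} in the original space,
% which is the incompatibility of optimality criteria discussed above.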
Abstract:
Low concentrations of elements in geochemical analyses have the peculiarity of being compositional data and, for a given level of significance, are likely to be beyond the capabilities of laboratories to distinguish between minute concentrations and complete absence, thus preventing laboratories from reporting extremely low concentrations of the analyte. Instead, what is reported is the detection limit, which is the minimum concentration that conclusively differentiates between presence and absence of the element. A spatially distributed exhaustive sample is employed in this study to generate unbiased sub-samples, which are further censored to observe the effect that different detection limits and sample sizes have on the inference of population distributions starting from geochemical analyses having specimens below the detection limit (nondetects). The isometric logratio transformation is used to convert the compositional data in the simplex to samples in real space, thus allowing the practitioner to properly borrow from the large source of statistical techniques valid only in real space. The bootstrap method is used to numerically investigate the reliability of inferring several distributional parameters employing different forms of imputation for the censored data. The case study illustrates that, in general, the best results are obtained when imputations are made using the distribution best fitting the readings above the detection limit, and exposes the problems of other more widely used practices. When the sample is spatially correlated, it is necessary to combine the bootstrap with stochastic simulation.
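A hedged sketch of the general imputation-plus-bootstrap idea described above, using a lognormal fit as the "distribution best fitting the readings above the detection limit"; the use of scipy and the synthetic data are assumptions for the sketch, not the study's actual procedure or dataset:

# Sketch: impute values below a detection limit (DL) by sampling from a
# distribution fitted to the detects, then bootstrap a distributional parameter.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_data = stats.lognorm(s=0.8, scale=1.0).rvs(size=300, random_state=rng)
DL = 0.5
detects = true_data[true_data >= DL]
n_nondetects = int((true_data < DL).sum())

# Fit a lognormal to the readings above the detection limit, as in the abstract.
shape, loc, scale = stats.lognorm.fit(detects, floc=0)
fitted = stats.lognorm(s=shape, loc=loc, scale=scale)

def impute_nondetects(n):
    # Draw from the fitted distribution truncated to (0, DL) via the inverse CDF.
    u = rng.uniform(0, fitted.cdf(DL), size=n)
    return fitted.ppf(u)

boot_means = []
for _ in range(1000):
    sample = np.concatenate([detects, impute_nondetects(n_nondetects)])
    boot_means.append(rng.choice(sample, size=sample.size, replace=True).mean())

print("bootstrap mean estimate:", np.mean(boot_means),
      "95% CI:", np.percentile(boot_means, [2.5, 97.5]))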
Abstract:
Test-based assessment tools are mostly focused on the use of computers. However, advanced Information and Communication Technologies, such as handheld devices, open up the possibility of creating new assessment scenarios, increasing teachers' choices to design more appropriate tests for their subject areas. In this paper we use the term Computing-Based Testing (CBT) instead of Computer-Based Testing, as it captures the emerging trends better. Within the CBT context, the paper is centred on proposing an approach for "Assessment in situ" activities, where questions have to be answered in front of a real space/location (situ). In particular, we present the QuesTInSitu software implementation, which includes both an editor and a player based on the IMS Question and Test Interoperability specification and GoogleMaps. With QuesTInSitu, teachers can create geolocated questions and tests (routes), and students can answer the tests using mobile devices with GPS while following a route. Three illustrative scenarios and the results from the implementation of one of them in a real educational situation show that QuesTInSitu enables the creation of innovative, enriched and context-aware assessment activities. The results also indicate that the use of mobile devices and location-based systems in assessment activities helps students put explorative and spatial skills into practice and fosters their motivation, reflection and personal observation.
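A hedged sketch of the basic geolocation check behind such in-situ questions (a plain haversine-distance gate; the function names, the 50 m radius and the coordinates are illustrative assumptions, not QuesTInSitu's actual implementation):

# Sketch: unlock a geolocated question only when the student's GPS position
# is within a given radius of the question's location (haversine distance).
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points in degrees."""
    R = 6_371_000  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def question_unlocked(student_pos, question_pos, radius_m=50.0):
    return haversine_m(*student_pos, *question_pos) <= radius_m

# Invented coordinates for illustration only.
question = (41.3874, 2.1686)
print(question_unlocked((41.3876, 2.1690), question))  # True: within ~50 m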
Abstract:
Until recently, the hard X-ray, phase-sensitive imaging technique called grating interferometry was thought to provide information only in real space. However, by utilizing an alternative approach to data analysis we demonstrated that the angular resolved ultra-small angle X-ray scattering distribution can be retrieved from experimental data. Thus, reciprocal space information is accessible by grating interferometry in addition to real space. Naturally, the quality of the retrieved data strongly depends on the performance of the employed analysis procedure, which in this context involves deconvolution of periodic and noisy data. The aim of this article is to compare several deconvolution algorithms for retrieving the ultra-small angle X-ray scattering distribution in grating interferometry. We quantitatively compare the performance of three deconvolution procedures (i.e., Wiener, iterative Wiener and Lucy-Richardson) in the case of realistically modeled, noisy and periodic input data. The simulations showed that the Lucy-Richardson algorithm is the most reliable and most efficient given the characteristics of the signals in this context. The availability of a reliable data analysis procedure is essential for future developments in grating interferometry.
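For reference, a hedged sketch of the Lucy-Richardson iteration on periodic (circularly convolved) 1-D data; this is the textbook update rule implemented with numpy, not the authors' code, and the kernel and signal below are invented:

# Sketch: Lucy-Richardson deconvolution of a 1-D periodic signal.
# Update: d_{k+1} = d_k * correlate(observed / convolve(d_k, K), K),
# using circular (FFT-based) operations because the data are periodic.
import numpy as np

def circular_convolve(x, kernel):
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(kernel, n=x.size)))

def circular_correlate(x, kernel):
    return np.real(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(kernel, n=x.size))))

def lucy_richardson(observed, kernel, n_iter=50, eps=1e-12):
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = circular_convolve(estimate, kernel) + eps
        estimate = estimate * circular_correlate(observed / blurred, kernel)
    return estimate

# Invented test case: a narrow scattering peak blurred by a Gaussian kernel plus noise.
x = np.arange(64)
truth = np.exp(-0.5 * ((x - 20) / 1.5) ** 2)
kernel = np.exp(-0.5 * ((x - 32) / 4.0) ** 2)
kernel = np.roll(kernel, -32)          # centre the kernel at index 0 for circular use
kernel /= kernel.sum()
observed = circular_convolve(truth, kernel) + np.random.default_rng(0).normal(0, 1e-3, x.size)
recovered = lucy_richardson(np.clip(observed, 1e-9, None), kernel)
print("recovered peak position:", int(np.argmax(recovered)))  # ~20 for this example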
Abstract:
Phenomena with a constrained sample space appear frequently in practice. This is the case, e.g., with strictly positive data, or with compositional data, like percentages or proportions. If the natural measure of difference is not the absolute one, simple algebraic properties show that it is more convenient to work with a geometry different from the usual Euclidean geometry in real space, and with a measure different from the usual Lebesgue measure, leading to alternative models which better fit the phenomenon under study. The general approach is presented and illustrated using the normal distribution, both on the positive real line and on the D-part simplex. The original ideas of McAlister in his introduction to the lognormal distribution in 1879 are recovered and updated.
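As a hedged illustration of this change of geometry (standard Aitchison-style conventions, which may differ in detail from the paper's), on the positive real line one can replace addition, scalar multiplication and absolute difference by

% Illustrative geometry on the positive real line R+:
% perturbation, powering and a log-based distance.
\[
  x \oplus y \;=\; x\,y, \qquad
  \alpha \odot x \;=\; x^{\alpha}, \qquad
  d(x, y) \;=\; \lvert \ln x - \ln y \rvert .
\]
% With respect to this geometry and its induced measure, a "normal" random variable
% on R+ has density proportional to exp(-(ln x - mu)^2 / (2 sigma^2)); viewed with
% the ordinary Lebesgue measure this is the lognormal family introduced by McAlister.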