926 results for linguistic reconstruction


Relevance:

20.00%

Publisher:

Abstract:

The problem of reconstruction of a refractive-index distribution (RID) in optical refraction tomography (ORT) with optical path-length difference (OPD) data is solved using two adaptive-estimation-based extended-Kalman-filter (EKF) approaches. First, a basic single-resolution EKF (SR-EKF) is applied to a state variable model describing the tomographic process, to estimate the RID of an optically transparent refracting object from noisy OPD data. The initialization of the biases and covariances corresponding to the state and measurement noise is discussed. The state and measurement noise biases and covariances are adaptively estimated. An EKF is then applied to the wavelet-transformed state variable model to yield a wavelet-based multiresolution EKF (MR-EKF) solution approach. To numerically validate the adaptive EKF approaches, we evaluate them with benchmark studies of standard stationary cases, where comparative results with commonly used efficient deterministic approaches can be obtained. Detailed reconstruction studies for the SR-EKF and two versions of the MR-EKF (with Haar and Daubechies-4 wavelets) compare well with those obtained from a typically used variant of the (deterministic) algebraic reconstruction technique, the average correction per projection method, thus establishing the capability of the EKF for ORT. To the best of our knowledge, the present work contains unique reconstruction studies encompassing the use of EKF for ORT in single-resolution and multiresolution formulations, and also in the use of adaptive estimation of the EKF's noise covariances. (C) 2010 Optical Society of America
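As a rough illustration of the estimation machinery the abstract builds on (not the paper's state-variable model for ORT, and omitting the adaptive estimation of the noise statistics), a generic extended-Kalman-filter predict/update step can be sketched as follows; the toy constant-state model in the usage section is purely illustrative:

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle of an extended Kalman filter (EKF).

    x, P : current state estimate and covariance
    z    : new measurement
    f, F : state transition function and its Jacobian
    h, H : measurement function and its Jacobian
    Q, R : process and measurement noise covariances
    """
    # Predict
    x_pred = f(x)
    Fx = F(x)
    P_pred = Fx @ P @ Fx.T + Q
    # Update
    Hx = H(x_pred)
    y = z - h(x_pred)                     # innovation
    S = Hx @ P_pred @ Hx.T + R            # innovation covariance
    K = P_pred @ Hx.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ Hx) @ P_pred
    return x_new, P_new

# Toy usage: estimate a constant scalar state from noisy measurements.
rng = np.random.default_rng(0)
x, P = np.zeros(1), np.eye(1)
ident = lambda v: v
jac = lambda v: np.eye(1)
Q, R = 1e-6 * np.eye(1), 0.09 * np.eye(1)
for _ in range(200):
    z = np.array([2.0]) + 0.3 * rng.standard_normal(1)
    x, P = ekf_step(x, P, z, ident, jac, ident, jac, Q, R)
# x[0] is now close to the true value 2.0
```

In the adaptive variants the paper describes, Q and R would themselves be re-estimated from the innovation sequence rather than fixed in advance.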


Solar ultraviolet (UV) radiation has a broad range of effects concerning life on Earth. Soon after the mid-1980s, it was recognized that the stratospheric ozone content was declining over large areas of the globe. Because the stratospheric ozone layer protects life on Earth from harmful UV radiation, this led to concern about possible changes in UV radiation due to anthropogenic activity. Prompted by this concern, many stations for monitoring surface UV radiation were founded in the late 1980s and early 1990s. As a consequence, there is an apparent lack of information on UV radiation further in the past: measurements cannot tell us how UV radiation levels have changed on time scales of, for instance, several decades. The aim of this thesis was to improve our understanding of past variations in surface UV radiation by developing techniques for UV reconstruction. Such techniques utilize commonly available meteorological data together with measurements of the total ozone column to reconstruct, or estimate, the amount of UV radiation reaching Earth's surface in the past. Two different techniques for UV reconstruction were developed. Both are based on first calculating the clear-sky UV radiation using a radiative transfer model. The clear-sky value is then corrected for the effect of clouds based on either (i) sunshine duration or (ii) pyranometer measurements. Both techniques also account for the variations in surface albedo caused by snow, whereas aerosols are included as a typical climatological aerosol load. Using these methods, long time series of reconstructed UV radiation were produced for five European locations, namely Sodankylä and Jokioinen in Finland, Bergen in Norway, Norrköping in Sweden, and Davos in Switzerland. Both UV reconstruction techniques developed in this thesis account for the greater part of the factors affecting the amount of UV radiation reaching the Earth's surface.
They are thus considered reliable, as also suggested by the good performance of the methods. The pyranometer-based method shows better performance than the sunshine-based method, especially for daily values. For monthly values, the difference between the methods is smaller, indicating that the sunshine-based method is roughly as good as the pyranometer-based one for assessing long-term changes in surface UV radiation. The time series of reconstructed UV radiation produced in this thesis provide new insight into the past UV radiation climate and how the UV radiation has varied over the years. The sunshine-based UV time series in particular, extending back to 1926 at Davos and 1950 at Sodankylä, also put into perspective the recent changes driven by the ozone decline observed over the last few decades. At Davos, the reconstructed UV over the period 1926-2003 shows considerable variation throughout the entire period, with high values in the mid-1940s, early 1960s, and the 1990s. Moreover, the variations prior to 1980 were found to be caused primarily by variations in cloudiness, while the increase of 4.5 %/decade over the period 1979-1999 was driven by both the decline in the total ozone column and changes in cloudiness. Of the other stations included in this work, both Sodankylä and Norrköping show a clear increase in UV radiation since the early 1980s (3-4 %/decade), driven primarily by changes in cloudiness and to a lesser extent by the diminution of total ozone. At Jokioinen, a weak increase was found, while at Bergen there was no considerable overall change in the UV radiation level.
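The sunshine-based scheme can be caricatured in a few lines: a radiative transfer model supplies the clear-sky UV, which is then scaled by a cloud modification factor derived from relative sunshine duration. The Ångström-style coefficients below are illustrative placeholders, not the thesis's fitted values:

```python
def uv_from_sunshine(uv_clear, sunshine_hours, daylight_hours,
                     a=0.25, b=0.75):
    """All-sky UV estimate from a clear-sky value (supplied by a
    radiative transfer model) and measured sunshine duration.

    a, b are illustrative regression coefficients; in practice they
    would be fitted against UV measurements at a reference site.
    """
    s = sunshine_hours / daylight_hours   # relative sunshine duration
    cmf = a + b * s                       # cloud modification factor
    return uv_clear * cmf
```

A fully sunny day returns the clear-sky value unchanged, while a fully overcast day is reduced to the fraction a of it; the pyranometer-based method replaces this sunshine proxy with a measured global-radiation ratio.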


Today, techniques and technologies interact with the human body and give people the possibility of reconstructing their bodies, but also of improving and augmenting them. Hybridization is a technological process aimed at compensating for human deficiencies. The increase in the power of being is exalted (health, sexuality, performance, youth), yet access to it is not available to all. This book sets out to untangle the different representations of the hybrid body and the projects that underlie them.


The book presents a reconstruction, interpretation and critical evaluation of the Schumpeterian theoretical approach to socio-economic change. The analysis focuses on the problem of social evolution, on the interpretation of the innovation process and business cycles and, finally, on Schumpeter's optimistic neglect of ecological-environmental conditions as possible factors influencing socio-economic change. The author investigates how the Schumpeterian approach describes the process of social and economic evolution, and how the logic of transformations is described, explained and understood in Schumpeterian theory. The material of the study includes Schumpeter's works written after 1925, a related part of the commentary literature on these works, and a selected part of the related literature on the innovation process, technological transformations and the problem of long waves. Concerning the period after 1925, the Schumpeterian oeuvre is conceived and analysed as a more or less homogeneous corpus of texts. The book is divided into nine chapters. Chapters 1-2 describe the research problems and methods. Chapter 3 provides a systematic reconstruction of Schumpeter's ideas concerning social and economic evolution. Chapters 4 and 5 focus on the innovation process. Chapters 6 and 7 examine Schumpeter's theory of business cycles. Chapter 8 evaluates Schumpeter's views concerning his relative neglect of ecological-environmental conditions as possible factors influencing socio-economic change. Finally, Chapter 9 draws the main conclusions.


We have presented an overview of the FSIG approach and related FSIG grammars to issues of very low complexity and parsing strategy. We arrived at an optimistic conclusion: most FSIG grammars could be decomposed in a reasonable way and then processed efficiently.


Although we have seen a proliferation of studies examining the discursive aspects of strategy, the full potential of the linguistic turn has not yet been realized. This paper argues for a multifaceted interdiscursive approach that can help to go beyond simplistic views of strategy as a unified discourse and pave the way for new research efforts. At the meta-level, it is important to focus attention on struggles over competing conceptions of strategy in this body of knowledge. At the meso-level, it is interesting to examine alternative strategy narratives to better understand the polyphony and dialogicality in organizational strategizing. At the micro-level, it is useful to reflect on the rhetorical tactics and skills used in strategy conversations to promote or resist specific views. This paper calls for new focused analyses at these different levels, but also for studies of the processes linking them.


In positron emission tomography (PET), image reconstruction is a demanding problem. Since PET image reconstruction is an ill-posed inverse problem, new methodologies need to be developed. Although previous studies show that incorporating spatial and median priors improves image quality, artifacts such as over-smoothing and streaking remain evident in the reconstructed image. In this work, we use a simple yet powerful technique to tackle the PET image reconstruction problem. The proposed technique integrates a Bayesian approach with a finite impulse response (FIR) filter. An FIR filter is designed whose coefficients are determined from a surface diffusion model. The resulting reconstructed image is iteratively filtered and fed back to obtain the new estimate. Experiments are performed on a simulated PET system. The results show that the proposed approach outperforms the recently proposed MRP algorithm in terms of image quality and normalized mean square error.
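The filter-and-feed-back loop described above can be sketched generically. The update function and the 3-tap diffusion-like kernel below are stand-ins: in the paper the update is a Bayesian reconstruction step and the FIR coefficients come from the surface diffusion model.

```python
import numpy as np

def fir_filter(image, coeffs):
    """Apply a 1-D FIR smoothing filter (a stand-in for the 2-D filter
    whose coefficients would be derived from a surface diffusion model)."""
    return np.convolve(image, coeffs, mode="same")

def iterate_with_feedback(estimate, update_fn, coeffs, n_iter=10):
    """Alternate a reconstruction update with FIR filtering, feeding the
    filtered image back as the next estimate (the feedback loop the
    abstract describes; update_fn stands in for the Bayesian update)."""
    for _ in range(n_iter):
        estimate = update_fn(estimate)          # e.g. a Bayesian/ML update
        estimate = fir_filter(estimate, coeffs)  # FIR feedback step
    return estimate

# Toy usage: with an identity update, the loop simply diffuses a point
# source, illustrating the smoothing effect of the feedback filtering.
spike = np.zeros(21)
spike[10] = 1.0
smoothed = iterate_with_feedback(spike, lambda img: img,
                                 np.array([0.25, 0.5, 0.25]))
```

The kernel sums to one, so total intensity is preserved while the point source spreads out, which is the qualitative behaviour a diffusion-derived filter is meant to produce.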


Non-uniform sampling of a signal is formulated as an optimization problem that minimizes the signal reconstruction error. Dynamic programming (DP) has been used to solve this problem efficiently for a finite-duration signal. Further, the optimum samples are quantized to realize a speech coder. The quantizer and the DP-based optimum search for non-uniform samples (DP-NUS) can be combined in a closed-loop manner, which provides a distinct advantage over the open-loop formulation. The DP-NUS formulation provides useful control over the trade-off between bit rate and performance (reconstruction error). It is shown that a 5-10 dB SNR improvement is possible using DP-NUS compared to the extrema sampling approach. In addition, the closed-loop DP-NUS gives a 4-5 dB improvement in reconstruction error.
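A miniature version of the dynamic-programming search can be sketched as follows, using squared linear-interpolation error as the reconstruction cost. This is a simplification: the actual coder works on speech and combines the search with quantization in the closed loop.

```python
import numpy as np

def seg_cost(x, i, j):
    """Squared error of linearly interpolating x between samples i and j."""
    n = j - i
    interp = x[i] + (x[j] - x[i]) * np.arange(n + 1) / n
    return float(np.sum((x[i:j + 1] - interp) ** 2))

def dp_nus(x, k):
    """Choose k sample positions (both endpoints included) minimizing the
    total linear-interpolation reconstruction error, via DP over
    (position, number of samples used)."""
    n = len(x)
    INF = float("inf")
    cost = [[INF] * (k + 1) for _ in range(n)]
    back = [[-1] * (k + 1) for _ in range(n)]
    cost[0][1] = 0.0                       # the first sample is always kept
    for j in range(1, n):
        for m in range(2, k + 1):
            for i in range(j):
                c = cost[i][m - 1] + seg_cost(x, i, j)
                if c < cost[j][m]:
                    cost[j][m], back[j][m] = c, i
    # Trace back the optimal sample positions.
    path, j = [n - 1], n - 1
    for m in range(k, 1, -1):
        j = back[j][m]
        path.append(j)
    return path[::-1], cost[n - 1][k]

# Toy usage: a triangular signal is represented exactly by 3 samples,
# and the DP search finds the breakpoint at the apex.
idx, err = dp_nus(np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]), 3)
```

In a closed-loop coder, `seg_cost` would be evaluated on the quantized sample values rather than the originals, so the search accounts for quantization error directly.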


We examine institutional work from a discursive perspective and argue that reasonability, the existence of acceptable justifying reasons for beliefs and practices, is a key part of legitimation. Drawing on philosophy of language, we maintain that institutional work takes place in the context of a ‘space of reasons’ determined by widely held assumptions about what is reasonable and what is not. We argue that reasonability provides the main contextual constraint of institutional work, its major outcome, and a key trigger for actors to engage in it. We draw on Hilary Putnam’s concept of the ‘division of linguistic labor’ to highlight the specialized distribution of knowledge and authority in defining valid ways of reasoning. In this view, individuals use institutionalized vocabularies to reason about their choices and understand their context with limited understanding of how and why these structures have become what they are. We highlight the need to understand how professions and other actors establish and maintain the criteria of reasoning in various areas of expertise through discursive institutional work.


In this dissertation I study language complexity from a typological perspective. Since the structuralist era, it has been assumed that local complexity differences in languages are balanced out in cross-linguistic comparisons and that complexity is not affected by the geopolitical or sociocultural aspects of the speech community. However, these assumptions have seldom been studied systematically from a typological point of view. My objective is to define complexity so that it is possible to compare it across languages and to approach its variation with the methods of quantitative typology. My main empirical research questions are: i) does language complexity vary in any systematic way in local domains, and ii) can language complexity be affected by the geographical or social environment? These questions are studied in three articles, whose findings are summarized in the introduction to the dissertation. In order to enable cross-language comparison, I measure complexity as the description length of the regularities in an entity; I separate it from difficulty, focus on local instead of global complexity, and break it up into different types. This approach helps avoid the problems that plagued earlier metrics of language complexity. My approach to grammar is functional-typological in nature, and the theoretical framework is basic linguistic theory. I delimit the empirical research functionally to the marking of core arguments (the basic participants in the sentence). I assess the distributions of complexity in this domain with multifactorial statistical methods and use different sampling strategies, implementing, for instance, the Greenbergian view of universals as diachronic laws of type preference. My data come from large and balanced samples (up to approximately 850 languages), drawn mainly from reference grammars. 
The results suggest that various significant trends occur in the marking of core arguments in regard to complexity and that complexity in this domain correlates with population size. These results provide evidence that linguistic patterns interact among themselves in terms of complexity, that language structure adapts to the social environment, and that there may be cognitive mechanisms that limit complexity locally. My approach to complexity and language universals can therefore be successfully applied to empirical data and may serve as a model for further research in these areas.
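Description length here is a conceptual yardstick applied to grammatical descriptions, not raw text. Still, the underlying intuition that regular structures admit shorter descriptions can be illustrated with a crude, hypothetical compression-based proxy:

```python
import random
import zlib

def description_length(text):
    """Crude proxy for description length: the size in bytes of the
    string's compressed form (more regularity -> shorter description)."""
    return len(zlib.compress(text.encode("utf-8"), 9))

# A highly regular string has a much shorter description than an
# irregular string of the same length.
regular = "ab" * 500
random.seed(0)
irregular = "".join(random.choice("abcdefgh") for _ in range(1000))
```

This mirrors the thesis's separation of complexity from difficulty: the measure depends only on the regularities in the object, not on how hard the object is for a speaker or learner to process.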


In rapid parallel magnetic resonance imaging, image reconstruction is a challenging problem. Here, a novel image reconstruction technique in a neural network framework for data acquired along any general trajectory, called "Composite Reconstruction And Unaliasing using Neural Networks" (CRAUNN), is proposed. CRAUNN is based on the observation that the nature of aliasing remains unchanged whether the undersampled acquisition contains only low frequencies or includes high frequencies too. The transformation needed to reconstruct the alias-free image from the aliased coil images is learnt using acquisitions consisting of densely sampled low frequencies. Neural networks are used as machine learning tools to learn this transformation, in order to obtain the desired alias-free image for actual acquisitions containing sparsely sampled low as well as high frequencies. CRAUNN operates in the image domain and does not require explicit coil sensitivity estimation. It is also independent of the sampling trajectory and can be applied to arbitrary trajectories. As a pilot trial, the technique is first applied to Cartesian trajectory-sampled data. Experiments performed using radial and spiral trajectories on real and synthetic data illustrate the performance of the method. The reconstruction errors depend on the acceleration factor as well as the sampling trajectory; higher acceleration factors can be obtained when radial trajectories are used. Comparisons against existing techniques show that CRAUNN performs on par with the state of the art. Acceleration factors of up to 4, 6 and 4 are achieved in the Cartesian, radial and spiral cases, respectively. (C) 2010 Elsevier Inc. All rights reserved.


The Grad–Shafranov reconstruction is a method of estimating the orientation (invariant axis) and cross section of magnetic flux ropes using the data from a single spacecraft. It can be applied to various magnetic structures such as magnetic clouds (MCs) and flux ropes embedded in the magnetopause and in the solar wind. We develop a number of improvements of this technique and show some examples of the reconstruction procedure of interplanetary coronal mass ejections (ICMEs) observed at 1 AU by the STEREO, Wind, and ACE spacecraft during the minimum following Solar Cycle 23. The analysis is conducted not only for ideal localized ICME events but also for non-trivial cases of magnetic clouds in fast solar wind. The Grad–Shafranov reconstruction gives reasonable results for the sample events, although it possesses certain limitations, which need to be taken into account during the interpretation of the model results.