940 results for texture-defined (second-order) information
Abstract:
Purpose: Development of an interpolation algorithm for re-sampling spatially distributed CT data with the following features: global and local integral conservation, avoidance of negative interpolation values for positively defined datasets, and the ability to control re-sampling artifacts. Method and Materials: The interpolation can be separated into two steps: first, the discrete CT data have to be continuously distributed by an analytic function considering the boundary conditions. Generally, this function is determined by piecewise interpolation. Instead of using linear or high-order polynomial interpolations, which do not fulfill all of the above-mentioned features, a special form of Hermitian curve interpolation is used to solve the interpolation problem with respect to the required boundary conditions. A single parameter controls the behavior of the interpolation function. Second, the interpolated data have to be re-distributed with respect to the requested grid. Results: The new algorithm was compared with commonly used interpolation functions based on linear and second-order polynomials. It is demonstrated that these interpolation functions may over- or underestimate the source data by about 10%–20%, while the parameter of the new algorithm can be adjusted to significantly reduce these interpolation errors. Finally, the performance and accuracy of the algorithm were tested by re-gridding a series of X-ray CT images. Conclusion: Inaccurate sampling values may occur due to the lack of integral conservation. Re-sampling algorithms using high-order polynomial interpolation functions may result in significant artifacts in the re-sampled data. Such artifacts can be avoided by using the new algorithm based on Hermitian curve interpolation.
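The abstract does not give the interpolation formula, but the core idea, a piecewise Hermitian interpolant whose single parameter damps over- and undershoot, can be sketched as follows. This is a minimal Python illustration, not the authors' algorithm: the slope-damping parameter `tension` and the omission of the integral-conserving re-binning step are assumptions of the sketch.

```python
import numpy as np

def hermite_resample(x, y, x_new, tension=0.5):
    """Piecewise-cubic Hermite interpolation with a single shape parameter.
    tension=1 gives central-difference (Catmull-Rom-like) slopes, which may
    overshoot; tension=0 flattens slopes to zero, suppressing overshoot.
    Illustrative only -- the integral-conserving re-binning step described
    in the abstract is not implemented here."""
    m = tension * np.gradient(y, x)              # damped slope estimates
    idx = np.clip(np.searchsorted(x, x_new) - 1, 0, len(x) - 2)
    h = x[idx + 1] - x[idx]
    t = (x_new - x[idx]) / h
    h00 = (1 + 2 * t) * (1 - t) ** 2             # Hermite basis functions
    h10 = t * (1 - t) ** 2
    h01 = t ** 2 * (3 - 2 * t)
    h11 = t ** 2 * (t - 1)
    return (h00 * y[idx] + h10 * h * m[idx]
            + h01 * y[idx + 1] + h11 * h * m[idx + 1])

# Example: undershoot next to a sharp edge shrinks as tension -> 0.
x = np.arange(6, dtype=float)
y = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
xf = np.linspace(0.0, 5.0, 51)
for tau in (1.0, 0.5, 0.0):
    print(tau, hermite_resample(x, y, xf, tension=tau).min())
```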
Abstract:
The performance of memory-guided saccades with two different delays (3 s and 30 s of memorisation) was studied in eight subjects. Single-pulse transcranial magnetic stimulation (TMS) was applied simultaneously over the left and right dorsolateral prefrontal cortex (DLPFC) 1 s after target presentation. At both delays, stimulation significantly increased the percentage of amplitude errors in memory-guided saccades. Furthermore, the interfering effect of TMS was significantly greater in the short-delay paradigm than in the long-delay paradigm. The results are discussed in the context of a mixed model of spatial working memory control comprising two components: first, serial information processing with a predominant role of the DLPFC during the early period of memorisation and, second, parallel information processing, independent of the DLPFC, operating during longer delays.
Abstract:
Previous studies have either exclusively used annual tree-ring data or have combined tree-ring series with other, lower temporal resolution proxy series. Both approaches can lead to significant uncertainties, as tree-rings may underestimate the amplitude of past temperature variations, and the validity of non-annual records cannot be clearly assessed. In this study, we assembled 45 published Northern Hemisphere (NH) temperature proxy records covering the past millennium, each of which satisfied three essential criteria: the series must be of annual resolution, span at least a thousand years, and represent an explicit temperature signal. Suitable climate archives included ice cores, varved lake sediments, tree-rings and speleothems. We reconstructed the average annual land temperature series for the NH over the last millennium by applying three different reconstruction techniques: (1) principal components (PC) plus second-order autoregressive model (AR2), (2) composite plus scale (CPS) and (3) regularized errors-in-variables approach (EIV). Our reconstruction is in excellent agreement with six climate model simulations (including the first five models derived from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and an earth system model of intermediate complexity (LOVECLIM)), showing similar temperatures at multi-decadal timescales; however, all simulations appear to underestimate the temperature during the Medieval Warm Period (MWP). A comparison with other NH reconstructions shows that our results are consistent with earlier studies. These results indicate that well-validated annual proxy series should be used to minimize proxy-based artifacts, and that these proxy series contain sufficient information to reconstruct the low-frequency climate variability over the past millennium.
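Of the three techniques listed, composite plus scale (CPS) is the simplest to make concrete. The sketch below shows a minimal CPS reconstruction in Python, assuming a calibration interval with overlapping instrumental data; the record screening, nesting, and uncertainty estimation of the actual study are omitted.

```python
import numpy as np

def cps_reconstruction(proxies, target, calib):
    """Minimal composite-plus-scale reconstruction.
    proxies: (n_years, n_records) array of annual proxy values.
    target:  instrumental temperature series (n_years,), valid on `calib`.
    calib:   slice selecting the calibration (overlap) years."""
    # Standardize each record over the calibration interval, then composite.
    mu = proxies[calib].mean(axis=0)
    sd = proxies[calib].std(axis=0)
    composite = ((proxies - mu) / sd).mean(axis=1)
    # Rescale the composite to the mean/variance of the instrumental target.
    scale = target[calib].std() / composite[calib].std()
    return (composite - composite[calib].mean()) * scale + target[calib].mean()

# Usage sketch with synthetic data (illustrative only):
rng = np.random.default_rng(0)
proxies = rng.normal(size=(1000, 45))
target = rng.normal(size=1000)
recon = cps_reconstruction(proxies, target, calib=slice(850, 1000))
```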
Abstract:
The present study investigated the relationship between psychometric intelligence and temporal resolution power (TRP) as simultaneously assessed by auditory and visual psychophysical timing tasks. In addition, three different theoretical models of the functional relationship between TRP and psychometric intelligence as assessed by means of the Adaptive Matrices Test (AMT) were developed. To test the validity of these models, structural equation modeling was applied. Empirical data supported a hierarchical model that assumed auditory and visual modality-specific temporal processing at a first level and amodal temporal processing at a second level. This second-order latent variable was substantially correlated with psychometric intelligence. Therefore, the relationship between psychometric intelligence and psychophysical timing performance can be explained best by a hierarchical model of temporal information processing.
Abstract:
In order to study further the long-range correlations ("ridge") observed recently in p+Pb collisions at sqrt(s_NN) = 5.02 TeV, the second-order azimuthal anisotropy parameter of charged particles, v_2, has been measured with the cumulant method using the ATLAS detector at the LHC. In a data sample corresponding to an integrated luminosity of approximately 1 microb^(-1), the parameter v_2 has been obtained using two- and four-particle cumulants over the pseudorapidity range |eta| < 2.5. The results are presented as a function of transverse momentum and the event activity, defined in terms of the transverse energy summed over 3.1 < eta < 4.9 in the direction of the Pb beam.
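The two-particle cumulant mentioned here has a standard Q-vector form: per event, <2> = (|Q_2|^2 - M) / (M(M-1)) with Q_2 = sum_j exp(2i*phi_j) over the M particles, and v_2{2} = sqrt(<<2>>). A minimal Python sketch, without the four-particle cumulant or the event-activity selection and weighting applied in the ATLAS analysis:

```python
import numpy as np

def v2_two_particle(events):
    """Two-particle cumulant estimate of v_2 from per-event lists of
    azimuthal angles, via the Q-vector (flow-vector) formulation."""
    num, den = 0.0, 0.0
    for phis in events:
        M = len(phis)
        if M < 2:
            continue
        q2 = np.exp(2j * np.asarray(phis)).sum()   # Q-vector, harmonic n=2
        num += abs(q2) ** 2 - M                    # removes self-pairs
        den += M * (M - 1)
    c2 = num / den            # event-averaged two-particle cumulant c_2{2}
    return np.sqrt(c2)

# Usage sketch: events as lists of azimuthal angles in radians.
rng = np.random.default_rng(1)
events = [rng.uniform(0, 2 * np.pi, size=200) for _ in range(100)]
print(v2_two_particle(events))   # ~0 for these uncorrelated toy events
```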
Abstract:
A new research project has, quite recently, been launched to clarify how systems in second-order set theory extending NBG (as well as those in (n+3)-th order number theory extending the so-called Bernays–Gödel expansion of full (n+2)-th order number theory, etc.) differ from systems in second-order number theory extending ACA_0. In this article, we establish the equivalence between Δ^0_1-LFP and Δ^0_1-FP, which assert the existence of a least and of a (not necessarily least) fixed point, respectively, for positive elementary operators (or between Δ^0_{n+2}-LFP and Δ^0_{n+2}-FP). Our proof also shows the equivalence between ID_1 and ÎD_1, both of which are defined in the standard way but with the starting theory PA replaced by ZFC (or full (n+2)-th order number theory with global well-ordering).
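For readers unfamiliar with the schemata, the two fixed-point assertions being compared can be stated roughly as follows (a hedged paraphrase under assumed notation, with Γ ranging over positive elementary operators):

```latex
\begin{align*}
\Delta^0_1\text{-}\mathrm{FP}:\quad  & \exists X\,\bigl(X=\Gamma(X)\bigr)\\
\Delta^0_1\text{-}\mathrm{LFP}:\quad & \exists X\,\Bigl(X=\Gamma(X)\ \wedge\
  \forall Y\,\bigl(Y=\Gamma(Y)\rightarrow X\subseteq Y\bigr)\Bigr)
\end{align*}
```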
Abstract:
Deuterium (δD) and oxygen (δ18O) isotopes are powerful tracers of the hydrological cycle and have been extensively used for paleoclimate reconstructions, as they can provide information on past precipitation, temperature and atmospheric circulation. More recently, the 17O excess derived from precise measurements of δ17O and δ18O has given new and additional insights in tracing the hydrological cycle, although uncertainties still surround this proxy. Nevertheless, 17O excess could provide additional information on the atmospheric conditions at the moisture source as well as on fractionations associated with transport and site processes. In this paper we trace water stable isotopes (δD, δ17O and δ18O) along their path from precipitation to cave drip water and finally to speleothem fluid inclusions for Milandre cave in northwestern Switzerland. A two-year-long, daily resolved precipitation isotope record close to the cave site is compared to collected cave drip water (3-month average resolution) and to fluid inclusions of modern and Holocene stalagmites. Amount-weighted mean δD, δ18O and δ17O are -71.0‰, -9.9‰ and -5.2‰ for precipitation, -60.3‰, -8.7‰ and -4.6‰ for cave drip water, and -61.3‰, -8.3‰ and -4.7‰ for recent fluid inclusions, respectively. Second-order parameters have also been derived for precipitation and drip water and show similar values, with 18 per meg for 17O excess, whereas d-excess is 1.5‰ more negative in drip water. Furthermore, the atmospheric signal is shifted towards enriched values in the drip water and fluid inclusions (Δ of ~+10‰ for δD). The isotopic composition of cave drip water exhibits a weak seasonal signal which is shifted by around 8-10 months (groundwater residence time) relative to precipitation. Moreover, we carried out the first δ17O measurements in speleothem fluid inclusions, as well as the first comparison of δ17O behaviour from meteoric water to fluid-inclusion entrapment in speleothems. This study of precipitation, drip water and fluid inclusions will serve as a speleothem proxy calibration for Milandre cave, in order to reconstruct paleotemperatures and moisture-source variations for Western Central Europe.
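The second-order parameters quoted here have standard definitions: d-excess = δD - 8·δ18O (per mil), and 17O excess = ln(δ17O/1000 + 1) - 0.528·ln(δ18O/1000 + 1), reported in per meg. A minimal sketch:

```python
import numpy as np

def d_excess(dD, d18O):
    """Deuterium excess (per mil), standard definition."""
    return dD - 8.0 * d18O

def o17_excess(d17O, d18O):
    """17O excess in per meg, logarithmic definition with the
    conventional 0.528 reference slope."""
    return 1e6 * (np.log(d17O / 1e3 + 1.0) - 0.528 * np.log(d18O / 1e3 + 1.0))

# Amount-weighted precipitation means quoted in the abstract. The derived
# values are illustrative and need not match the published second-order
# parameters, which were computed from the full time series.
print(d_excess(-71.0, -9.9), o17_excess(-5.2, -9.9))
```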
On degeneracy and invariances of random fields paths with applications in Gaussian process modelling
Abstract:
We study pathwise invariances and degeneracies of random fields with motivating applications in Gaussian process modelling. The key idea is that a number of structural properties one may wish to impose a priori on functions boil down to degeneracy properties under well-chosen linear operators. We first show, in a second-order set-up, that almost sure degeneracy of random field paths under some class of linear operators defined in terms of signed measures can be controlled through the first two moments. A special focus is then put on the Gaussian case, where these results are revisited and extended to further linear operators thanks to state-of-the-art representations. Several degeneracy properties are tackled, including random fields with symmetric paths, centred paths, harmonic paths, or sparse paths. The proposed approach delivers a number of promising results and perspectives in Gaussian process modelling. In a first numerical experiment, it is shown that dedicated kernels can be used to infer an axis of symmetry. Our second numerical experiment deals with conditional simulations of a solution to the heat equation, where adapted kernels notably enable improved predictions of non-linear functionals of the field, such as its maximum.
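The symmetry experiment can be made concrete: averaging a base kernel over a candidate involution s yields a covariance whose centred Gaussian paths satisfy f(x) = f(s(x)) almost surely, since E[(f(x) - f(s(x)))^2] vanishes under the symmetrized kernel. A Python sketch under assumed notation (the paper's exact construction may differ):

```python
import numpy as np

def symmetrized_kernel(k, s):
    """Given a base kernel k(x, y) and an involution s (e.g. reflection
    about a candidate symmetry axis), return a kernel whose centred GP
    paths are invariant under s."""
    def k_sym(x, y):
        return 0.25 * (k(x, y) + k(s(x), y) + k(x, s(y)) + k(s(x), s(y)))
    return k_sym

# Example: squared-exponential base kernel, symmetry about x = 0.
k = lambda x, y: np.exp(-0.5 * (x - y) ** 2)
ks = symmetrized_kernel(k, lambda x: -x)
# Symmetric arguments give identical covariances:
print(np.isclose(ks(1.2, -1.2), ks(1.2, 1.2)))   # True
```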
Abstract:
A three-level satellite-to-ground monitoring scheme for conservation easement monitoring has been implemented in which high-resolution imagery serves as an intermediate step for inspecting high-priority sites. A digital vertical aerial camera system was developed to fulfill the need for an economical source of imagery for this intermediate step. A method for attaching the camera system to small aircraft was designed, and the camera system was calibrated and tested. To ensure that the images obtained were of suitable quality for use in Level 2 inspections, rectified imagery was required to provide positional accuracy of 5 meters or less, comparable to current commercially available high-resolution satellite imagery. Focal length calibration was performed to discover the infinity focal length at two lens settings (24 mm and 35 mm) with a precision of 0.1 mm. Known focal length is required for creation of navigation points representing locations to be photographed (waypoints). Photographing an object of known size at known distances on a test range allowed estimates of focal lengths of 25.1 mm and 35.4 mm for the 24 mm and 35 mm lens settings, respectively. Constants required for distortion removal procedures were obtained using analytical plumb-line calibration procedures for both lens settings, with mild distortion at the 24 mm setting and virtually no distortion found at the 35 mm setting. The system was designed to operate in a series of stages: mission planning, mission execution, and post-mission processing. During mission planning, waypoints were created using custom tools in geographic information system (GIS) software. During mission execution, the camera is connected to a laptop computer with a global positioning system (GPS) receiver attached. Customized mobile GIS software accepts position information from the GPS receiver, provides information for navigation, and automatically triggers the camera upon reaching the desired location. Post-mission processing (rectification) of imagery for removal of lens distortion effects, correction of imagery for horizontal displacement due to terrain variations (relief displacement), and relating the images to ground coordinates was performed with no more than a second-order polynomial warping function. Accuracy testing was performed to verify the positional accuracy capabilities of the system in an ideal-case scenario as well as a real-world case. Using many well-distributed and highly accurate control points on flat terrain, the rectified images yielded a median positional accuracy of 0.3 meters. Imagery captured over commercial forestland with varying terrain in eastern Maine, rectified to digital orthophoto quadrangles, yielded median positional accuracies of 2.3 meters, with accuracies of 3.1 meters or better in 75 percent of measurements made. These accuracies were well within performance requirements. The images from the digital camera system are of high quality, displaying significant detail at common flying heights. At common flying heights the ground resolution of the camera system ranges between 0.07 meters and 0.67 meters per pixel, satisfying the requirement that imagery be of comparable resolution to current high-resolution satellite imagery. Due to the high resolution of the imagery, the positional accuracy attainable, and the convenience with which it is operated, the digital aerial camera system developed is a potentially cost-effective solution for use in the intermediate step of a satellite-to-ground conservation easement monitoring scheme.
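A second-order polynomial warp of the kind used here maps image coordinates (x, y) to ground coordinates with six coefficients per axis, E = a0 + a1·x + a2·y + a3·xy + a4·x² + a5·y², fitted to control points by least squares (so at least six points are needed). A minimal Python sketch; the function and variable names are illustrative, not from the thesis:

```python
import numpy as np

def _design(img_xy):
    """Second-order polynomial design matrix for image coordinates."""
    x, y = img_xy[:, 0], img_xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_poly2_warp(img_xy, ground_xy):
    """Least-squares fit of the 2nd-order warp from >= 6 control points.
    Returns a (6, 2) coefficient array: easting and northing columns."""
    coeffs, *_ = np.linalg.lstsq(_design(img_xy), ground_xy, rcond=None)
    return coeffs

def apply_poly2_warp(coeffs, img_xy):
    """Map image coordinates to ground coordinates with fitted coeffs."""
    return _design(img_xy) @ coeffs
```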
Abstract:
Continuing an earlier article in which the concept of disease was defined, with a view to clarifying that of mental illness, the present study takes up those conditions that can be called diseases in the strict sense of the term. A first section surveys the disorders mentioned by Saint Thomas and explains them by placing them in the context of medieval medicine, with particular reference to Avicenna's Canon of Medicine. A second section addresses psychosomatic diseases, that is, those caused by an "animal passion". The clarification of the nature of what Aquinas calls "aegritudo animalis" is left for a third article.
Abstract:
More than 2000 turbidite, debris-flow, and slump deposits recovered at Site 823 record the history of the Queensland Trough since the middle Miocene and provide new insights about turbidites, debris flow, and slump deposits (herein termed gravity deposits). Changes in the composition and nature of gravity deposits through time can be related to tectonic movements, fluctuations in eustatic sea level, and sedimentological factors. The Queensland Trough is a long, relatively narrow, structural depression that formed as a result of Cretaceous to Tertiary rifting of the northeastern Australia continental margin. Thus, tectonics established the geometry of this marginal basin, and its steep slopes set the stage for repeated slope failures. Seismic data indicate that renewed faulting, subsidence, and associated tectonic tilting occurred during the early late Miocene (continuing into the early Pliocene), resulting in unstable slopes that were prone to slope failures and to generation of gravity deposits. Tectonic subsidence, together with a second-order eustatic highstand, resulted in platform drowning during the late Miocene. The composition of turbidites reflects their origin and provides insights about the nature of sedimentation on adjacent shelf areas. During relative highstands and times of platform drowning, planktonic foraminifers were reworked from slopes and/or drowned shelves and were redeposited in turbidites. During relative lowstands, quartz and other terrigenous sediment was shed into the basin. Quartzose turbidites and clay-rich hemipelagic muds also can record increased supply of terrigenous sediment from mainland Australia. Limestone fragments were eroded from carbonate platforms until the drowned platforms were buried under hemipelagic sediments following the late Miocene drowning event. Bioclastic grains and neritic foraminifers were reworked from neritic shelves during relative lowstands. During the late Pliocene (2.6 Ma), the increased abundance of bioclasts and quartz in turbidites signaled the shallowing and rejuvenation of the northeastern Australia continental shelf. However, a one-for-one relationship cannot be recognized between eustatic sea-level fluctuations and any single sedimentologic parameter. Perhaps, tectonism and sedimentological factors along the Queensland Trough played an equally important role in generating gravity deposits. Turbidites and other gravity deposits (such as those at Site 823) do not necessarily represent submarine fan deposits, particularly if they are composed of hemipelagic sediments reworked from drowned platforms and slopes. When shelves are drowned and terrigenous sediment is not directly supplied by nearby rivers/point sources, muddy terrigenous sediments blanket the entire slope and basin, rather than forming localized fans. Slope failures affect the entire slope, rather than localized submarine canyons. Slopes may become destabilized as a result of tectonic activity, inherent sediment weaknesses, and/or during relative sea-level lowstands. For this reason, sediment deposits in this setting reflect tectonic and eustatic events that caused slope instabilities, rather than migration of different submarine fan facies.
Abstract:
This paper presents the 2005 MIRACLE team's approach to the Ad-Hoc Information Retrieval tasks. The goal of this year's experiments was twofold: to continue testing the effect of combination approaches on information retrieval tasks, and to improve our basic processing and indexing tools, adapting them to new languages with unusual encoding schemes. The starting point was a set of basic components: stemming, transforming, filtering, proper noun extraction, paragraph extraction, and pseudo-relevance feedback. Some of these basic components were used in different combinations and orders of application for document indexing and for query processing. Second-order combinations were also tested, by averaging or selectively combining the documents retrieved by different approaches for a particular query. In the multilingual track, we concentrated our work on the process of merging the results of monolingual runs to obtain the overall multilingual result, relying on available translations. In both cross-lingual tracks, we used available translation resources, and in some cases a combination approach.
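A second-order combination by averaging can be made concrete as score fusion over ranked result lists. The Python sketch below is one plausible reading (min-max normalisation followed by averaging, in the spirit of CombSUM-style fusion); the exact MIRACLE fusion operators are not specified in the abstract:

```python
from collections import defaultdict

def combine_runs(runs, k=1000):
    """Fuse several ranked result lists for one query by averaging
    min-max normalised scores, keeping the top-k documents.
    runs: list of runs, each a list of (doc_id, score) pairs."""
    fused = defaultdict(float)
    for run in runs:
        scores = [s for _, s in run]
        lo, hi = min(scores), max(scores)
        for doc, s in run:
            norm = (s - lo) / (hi - lo) if hi > lo else 0.0
            fused[doc] += norm / len(runs)          # average across runs
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Usage sketch with two toy runs:
run_a = [("d1", 12.0), ("d2", 7.5), ("d3", 3.1)]
run_b = [("d2", 0.9), ("d4", 0.7), ("d1", 0.2)]
print(combine_runs([run_a, run_b], k=3))
```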
Abstract:
This article proposes a MAS architecture for network diagnosis under uncertainty. Network diagnosis is divided into two inference processes: hypothesis generation and hypothesis confirmation. The first process is distributed among several agents based on an MSBN, while the second is carried out by agents using semantic reasoning. A diagnosis ontology has been defined in order to combine both inference processes. To drive the deliberation process, dynamic data about the influence of observations are collected during the diagnosis process. In order to achieve quick and reliable diagnoses, this influence is used to choose the best action to perform. This approach has been evaluated in a P2P video streaming scenario. Computational and time improvements are highlighted in the conclusions.
Abstract:
The Internal Structure of Hydrogen-Air Diffusion Flames. The purpose of this paper is to study finite-rate chemistry effects in diffusion-controlled hydrogen-air flames under conditions appearing in some cases in a supersonic combustor. Since for large reaction rates the flame is close to chemical equilibrium, the reaction takes place in a very thin region, so that a "singular perturbation" treatment of the problem seems appropriate. It has been shown previously that, within the inner or reaction zone, convection effects may be neglected, the temperature is constant across the flame, and the mass fraction distributions are given by ordinary differential equations, where the only independent variable involved is the coordinate normal to the flame surface. The solution of the outer problem, which is a pure mixing problem with the additional condition that fuel and oxidizer do not coexist in any zone, provides the following information: the flame position, rates of fuel consumption, temperature, concentrations of species, fluid velocity outside of the flame, and the boundary conditions required to solve the "inner problem." The main contribution of this paper consists in the introduction of a fairly complicated chemical kinetic scheme representing the hydrogen-oxygen reaction. The nonlinear equations expressing the conservation of chemical species are approximately integrated by means of an integral method. It has been found that, in the case considered of a near-equilibrium diffusion flame, the role played by the dissociation-recombination reactions is purely marginal, and that some of the second-order "shuffling" reactions are close to equilibrium. The method shown here may be applied to compute the distance from the injector corresponding to a given separation from equilibrium, say ten to twenty percent. For the cases where this length is a small fraction of the combustion-zone length, the equilibrium treatment properly describes the flame behavior.
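The outer mixing problem with non-coexisting fuel and oxidizer is the classical flame-sheet (Burke-Schumann) limit. In mixture-fraction form it reads as follows (a hedged restatement under assumed notation; the paper's own variables may differ), with Z the mixture fraction, Z_s its stoichiometric value, and s the stoichiometric oxidizer-to-fuel mass ratio:

```latex
\begin{align*}
Z \ge Z_s:\quad & Y_O = 0, \qquad Y_F = Y_{F,1}\,\frac{Z - Z_s}{1 - Z_s},\\
Z \le Z_s:\quad & Y_F = 0, \qquad Y_O = Y_{O,2}\left(1 - \frac{Z}{Z_s}\right),\\
& Z_s = \left(1 + \frac{s\,Y_{F,1}}{Y_{O,2}}\right)^{-1},
\end{align*}
```

where Y_{F,1} and Y_{O,2} are the fuel and oxidizer mass fractions in their respective feed streams, and the flame sheet sits at Z = Z_s.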
Abstract:
Fission product yields are fundamental parameters for several nuclear engineering calculations and in particular for burn-up/activation problems. The impact of their uncertainties was widely studied in the past, and evaluations were released, although still incomplete. Recently, the nuclear community expressed the need for full fission yield covariance matrices to produce inventory calculation results that take into account the complete uncertainty data. In this work, we studied and applied a Bayesian/generalised least-squares method for covariance generation, and compared the generated uncertainties to the original data stored in the JEFF-3.1.2 library. Then, we focused on the effect of fission yield covariance information on fission pulse decay heat results for thermal fission of 235U. Calculations were carried out using different codes (ACAB and ALEPH-2) after introducing the new covariance values. Results were compared with those obtained with the uncertainty data currently provided by the library. The uncertainty quantification was performed with the Monte Carlo sampling technique. Indeed, correlations between fission yields strongly affect the statistics of decay heat.
Introduction: Nowadays, any engineering calculation performed in the nuclear field should be accompanied by an uncertainty analysis. In such an analysis, different sources of uncertainty are taken into account. Works such as those performed under the UAM project (Ivanov, et al., 2013) treat nuclear data as a source of uncertainty, in particular cross-section data, for which uncertainties given in the form of covariance matrices are already provided in the major nuclear data libraries. Meanwhile, fission yield uncertainties were often neglected or treated shallowly, because their effects were considered second order compared to cross-sections (Garcia-Herranz, et al., 2010). However, the Working Party on International Nuclear Data Evaluation Co-operation (WPEC)
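A generalised least-squares covariance update of the kind described can be sketched compactly: imposing a linear constraint (for example, the normalisation of a yield set) on a prior diagonal covariance introduces the off-diagonal correlations that, per the abstract, strongly affect the decay-heat statistics. A minimal Python sketch under the standard GLS formula; the actual constraints and data of the study are not reproduced here:

```python
import numpy as np

def gls_constrained_covariance(V, A, Vc):
    """GLS/Bayesian update of a prior covariance V given a linear
    constraint with design matrix A and constraint covariance Vc:
        V' = V - V A^T (A V A^T + Vc)^{-1} A V."""
    K = V @ A.T @ np.linalg.inv(A @ V @ A.T + Vc)
    return V - K @ A @ V          # posterior covariance, now correlated

# Example: three independent yields constrained to a (near-exact) sum.
V = np.diag([0.04, 0.09, 0.01])   # prior (uncorrelated) variances
A = np.ones((1, 3))               # constraint: sum of the yields
Vc = np.array([[1e-8]])           # near-exact normalisation
Vpost = gls_constrained_covariance(V, A, Vc)
print(np.round(Vpost, 4))         # off-diagonal terms come out negative
```

Samples consistent with such a posterior covariance (for Monte Carlo uncertainty propagation, as in the abstract) can then be drawn via a Cholesky factorisation of Vpost.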