975 results for normalization constant


Relevance:

20.00%

Publisher:

Abstract:

A Finsler space is said to be geodesically reversible if each oriented geodesic can be reparametrized as a geodesic with the reverse orientation. A reversible Finsler space is geodesically reversible, but the converse need not be true. In this note, building on recent work of LeBrun and Mason, it is shown that a geodesically reversible Finsler metric of constant flag curvature on the 2-sphere is necessarily projectively flat. As a corollary, using a previous result of the author, it is shown that a reversible Finsler metric of constant flag curvature on the 2-sphere is necessarily a Riemannian metric of constant Gauss curvature, thus settling a long-standing problem in Finsler geometry.


Creative Work


Excess Thorium-230 (230Thxs) as a constant flux tracer is an essential tool for paleoceanographic studies, but its limitations for flux normalization are still a matter of debate. In regions of rapid sediment accumulation, it has been an open question whether 230Thxs-normalized fluxes are biased by particle sorting effects during sediment redistribution. In order to study the sorting effect of sediment transport on 230Thxs, we analyzed the specific activity of 230Thxs in different particle size classes of carbonate-rich sediments from the South East Atlantic, and of opal-rich sediments from the Atlantic sector of the Southern Ocean. At both sites, we compare the 230Thxs distribution in neighboring high vs. low accumulation settings. Two grain-size fractionation methods are explored. We find that the 230Thxs distribution is strongly grain size dependent, and 50-90% of the total 230Thxs inventory is concentrated in fine material smaller than 10 µm, which is preferentially deposited at the high accumulation sites. This leads to an overestimation of the focusing factor Psi, and consequently to an underestimation of the vertical flux rate at such sites. The distribution of authigenic uranium indicates that fine organic-rich material has also been re-deposited from lateral sources. If the particle sorting effect is considered in the flux calculations, it reduces the estimated extent of sediment focusing. In order to assess the maximum effect of particle sorting on Psi, we present an extreme scenario, in which we assume a lateral sediment supply of only fine material (< 10 µm). In this case, the focusing factor of the opal-rich core would be reduced from Psi = 5.9 to Psi = 3.2. In a more likely scenario, allowing silt-sized material to be transported, Psi is reduced from 5.9 to 5.0 if particle sorting is taken into consideration. The bias introduced by particle sorting is most important for strongly focused sediments.
Comparing 230Thxs-normalized mass fluxes biased by sorting effects with uncorrected mass fluxes, we suggest that 230Thxs-normalization is still a valid tool to correct for lateral sediment redistribution. However, differences in focusing factors between core locations have to be evaluated carefully, taking the grain size distributions into consideration.
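The flux calculation underlying these estimates can be sketched as follows. This is a minimal illustration, not the study's own code: BETA is the standard 230Th production rate in seawater, and the function names and example numbers are assumptions for demonstration.

```python
# Sketch of 230Thxs flux normalization (illustrative values only).
BETA = 0.0267  # 230Th production rate in seawater, dpm m^-3 yr^-1

def normalized_flux(water_depth_m, th230xs_dpm_per_g):
    """Preserved vertical mass flux in g m^-2 yr^-1.

    Production integrated over the water column (BETA * depth) is divided
    by the measured 230Thxs specific activity of the sediment.
    """
    production = BETA * water_depth_m  # dpm m^-2 yr^-1 supplied from above
    return production / th230xs_dpm_per_g

def focusing_factor(inventory_dpm_m2, water_depth_m, interval_yr):
    """Psi: measured 230Thxs inventory divided by in-situ production.

    Psi > 1 indicates sediment focusing; if fine, 230Thxs-rich particles
    are preferentially supplied laterally, Psi is overestimated.
    """
    return inventory_dpm_m2 / (BETA * water_depth_m * interval_yr)
```

Because 230Thxs is concentrated in the fine fraction, a laterally supplied load of fine material inflates the inventory term and hence Psi, which is the bias the abstract describes.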


News agencies compete to maintain their foothold as providers of information to the mass media. Backed by a first-class technological infrastructure, Associated Press, Reuters, Agence France-Presse (AFP) and EFE lead the global media system because they have introduced far-reaching changes in their production routines, professional culture, and journalistic genres and styles, as well as through their innovative offering of products and services. This article also focuses on the strategies these agencies use to get closer to their audiences, through the agreements they establish and their treatment of highly specific themes. Some solutions that could contribute to the future survival of these organizations are also proposed.


This article introduces the concept of an emerging shared austerity reality, which refers to the socio-economic context of austerity that is shared by both social workers and service users, albeit to different degrees. Traditionally, the concept of shared reality has been used to capture the experiences of welfare professionals working in situations where both they and service users are exposed to the adverse effects of a natural disaster, war or terrorist attack. Here, the concept of shared reality is expanded to the context of austerity. Drawing on 21 in-depth interviews with public-sector social work practitioners in Greece, the article discusses, among other things, practitioners' anxieties about their children's future and their inability to take care of their elderly relatives, experiences that suggest an emerging shared austerity reality reflecting the deterioration of socio-economic conditions. The paper ends with a discussion of the possibilities of alliance and division that emerge from the concept, and of future research directions. It concludes with a reflection on the role of the social work profession, and of recent political developments in Greece, in anti-austerity struggles.


Forced convection heat transfer in a micro-channel filled with a porous material saturated with rarefied gas with internal heat generation is studied analytically in this work. The study is performed by analysing the boundary conditions for constant wall heat flux under local thermal non-equilibrium (LTNE) conditions. Invoking velocity slip and temperature jump, the thermal behaviour of the porous-fluid system is studied under thermally and hydrodynamically fully developed conditions. The flow inside the porous material is modelled by the Darcy–Brinkman equation. Exact solutions are obtained for both the fluid and solid temperature distributions for two primary approaches, models A and B, using constant wall heat flux boundary conditions. The temperature distributions and Nusselt numbers for models A and B are compared, and the limiting cases resulting in the convergence or divergence of the two models are discussed. The effects of pertinent parameters such as the fluid-to-solid effective thermal conductivity ratio, Biot number, Darcy number, velocity slip and temperature jump coefficients, and fluid and solid internal heat generation are also discussed. The results indicate that the Nusselt number decreases with increasing thermal conductivity ratio for both models. This contrasts with previous studies, which reported for model A that the Nusselt number increases with increasing thermal conductivity ratio. The Biot number and thermal conductivity ratio are found to have substantial effects on the role of the temperature jump coefficient in controlling the Nusselt number for models A and B. The Nusselt numbers calculated using model A change drastically with the variation of solid internal heat generation. In contrast, the Nusselt numbers obtained for model B show only a weak dependency on internal heat generation. The velocity slip coefficient has no noticeable effect on the Nusselt numbers for either model. The difference between the Nusselt numbers calculated using the two models decreases with an increase of the temperature jump coefficient.
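For reference, the governing equations described above are commonly written as follows. This is a generic sketch of the Darcy–Brinkman momentum equation and the two-equation LTNE energy balance with internal heat generation; the symbols are conventional ones from the LTNE literature, not necessarily the authors' notation.

```latex
% Darcy--Brinkman momentum equation (fully developed flow in x):
\mu_{\mathrm{eff}} \frac{d^{2}u}{dy^{2}} - \frac{\mu}{K}\,u - \frac{dp}{dx} = 0

% LTNE energy equations for the fluid and solid phases, with internal
% heat generation s_f, s_s and interstitial exchange term h_{sf} a_{sf}:
k_{f,\mathrm{eff}} \frac{\partial^{2} T_f}{\partial y^{2}}
  + h_{sf} a_{sf}\left( T_s - T_f \right) + s_f
  = \rho c_p\, u\, \frac{\partial T_f}{\partial x}

k_{s,\mathrm{eff}} \frac{\partial^{2} T_s}{\partial y^{2}}
  - h_{sf} a_{sf}\left( T_s - T_f \right) + s_s = 0
```

Models A and B in this literature differ in how the constant wall heat flux is split between the fluid and solid phases at the wall boundary.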


Not Available


Facial image processing is becoming widespread in human-computer applications, despite its complexity. High-level processes such as face recognition or gender determination rely on low-level routines that must effectively detect and normalize the faces that appear in the input image. In this paper, a face detection and normalization system is described. The approach taken is based on a cascade of fast, weak classifiers that together try to determine whether a frontal face is present in the image.
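The early-rejection logic of such a cascade can be sketched in a few lines. The stage functions below are toy stand-ins, not the paper's actual classifiers.

```python
def cascade_detect(window, stages):
    """Run a detection window through a cascade of weak classifiers.

    Each stage is a (classifier, threshold) pair; the window is rejected
    as soon as one stage's score falls below its threshold, so most
    non-face windows are discarded cheaply by the early, fast stages.
    """
    for classifier, threshold in stages:
        if classifier(window) < threshold:
            return False  # early rejection: not a face
    return True  # survived every stage: candidate face

# Toy stages scoring pixel intensities in [0, 1] (purely illustrative).
stages = [
    (lambda w: sum(w) / len(w), 0.3),  # cheap test: mean brightness
    (lambda w: max(w), 0.6),           # later, stricter test
]
```

The cascade's speed comes from this design: the vast majority of windows in an image contain no face, and most are rejected by the first one or two stages.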


The temperature of the mantle and the rate of melt production are parameters which play important roles in controlling the style of crustal accretion along mid-ocean ridges. To investigate the variability in crustal accretion that develops in response to variations in mantle temperature, we have conducted a geophysical investigation of the Southeast Indian Ridge (SEIR) between the Amsterdam hotspot and the Australian-Antarctic Discordance (88 degrees E-118 degrees E). The spreading center deepens by 2100 m from west to east within the study area. Despite a uniform, intermediate spreading rate (69-75 mm yr-1), the SEIR exhibits the range in axial morphology displayed by the East Pacific Rise and the Mid-Atlantic Ridge (MAR) and usually associated with variations in spreading rate. The spreading center is characterized by an axial high west of 102 degrees 45'E, whereas an axial valley is prevalent east of this longitude. Neither the deepening of the ridge axis nor the general evolution of axial morphology from an axial high to a rift valley is uniform. A region of intermediate morphology separates axial highs and MAR-like rift valleys. Local transitions in axial morphology occur in three areas along the ridge axis. The increase in axial depth toward the Australian-Antarctic Discordance may be explained by the thinning of the oceanic crust by ~4 km and the change in axial topography. The long-wavelength changes observed along the SEIR can be attributed to a gradient in mantle temperature between regions influenced by the Amsterdam and Kerguelen hotspots and the Australian-Antarctic Discordance. However, local processes, perhaps associated with a heterogeneous mantle or along-axis asthenospheric flow, may give rise to local transitions in axial topography and depth anomalies.


Mass spectrometry (MS)-based proteomics has seen significant technical advances during the past two decades, and mass spectrometry has become a central tool in many biosciences. Despite the popularity of MS-based methods, the handling of systematic non-biological variation in the data remains a common problem. This biasing variation can arise from several sources, ranging from sample handling to differences caused by the instrumentation. Normalization is the procedure which aims to account for this biasing variation and make samples comparable. Many normalization methods commonly used in proteomics have been adapted from the DNA-microarray world. Studies comparing normalization methods on proteomics data sets using some variability measures exist. However, a more thorough comparison, looking at the quantitative and qualitative differences in the performance of the different normalization methods and at their ability to preserve the true differential expression signal of proteins, is lacking. In this thesis, several popular and widely used normalization methods (linear regression normalization, local regression normalization, variance stabilizing normalization, quantile normalization, median central tendency normalization, and variants of some of the aforementioned methods), representing different normalization strategies, are compared and evaluated with a benchmark spike-in proteomics data set. The normalization methods are evaluated in several ways. Their performance is assessed qualitatively and quantitatively, both on a global scale and in pairwise comparisons of sample groups. In addition, it is investigated whether performing the normalization globally on the whole data set, or pairwise for the comparison pairs examined, affects a method's ability to normalize the data and preserve the true differential expression signal.
In this thesis, both major and minor differences in the performance of the different normalization methods were found. The way in which the normalization was performed (global normalization of the whole data set or pairwise normalization of the comparison pair) also affected the performance of some of the methods in pairwise comparisons, and differences among variants of the same methods were observed.
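As an illustration of one of the strategies being compared, quantile normalization can be sketched with NumPy. This is a minimal version that ignores tie handling and is not the thesis' implementation.

```python
import numpy as np

def quantile_normalize(X):
    """Quantile-normalize the columns (samples) of matrix X.

    Each value is replaced by the mean of the values sharing its rank
    across all samples, so every column ends up with an identical
    distribution (a common microarray-derived normalization).
    """
    # Rank of each entry within its column (double argsort trick).
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)
    # Mean across samples of the k-th smallest value, for each rank k.
    mean_by_rank = np.mean(np.sort(X, axis=0), axis=1)
    return mean_by_rank[ranks]
```

After normalization, rank orderings within each sample are preserved while inter-sample distributional differences are removed, which is exactly the kind of strong assumption whose effect on true differential expression the thesis evaluates.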


Statistical approaches to study extreme events require, by definition, long time series of data. In many scientific disciplines, these series are often subject to variations at different temporal scales that affect the frequency and intensity of their extremes. Therefore, the assumption of stationarity is violated and alternative methods to conventional stationary extreme value analysis (EVA) must be adopted. Using the example of environmental variables subject to climate change, in this study we introduce the transformed-stationary (TS) methodology for non-stationary EVA. This approach consists of (i) transforming a non-stationary time series into a stationary one, to which the stationary EVA theory can be applied, and (ii) reverse transforming the result into a non-stationary extreme value distribution. As a transformation, we propose and discuss a simple time-varying normalization of the signal and show that it enables a comprehensive formulation of non-stationary generalized extreme value (GEV) and generalized Pareto distribution (GPD) models with a constant shape parameter. A validation of the methodology is carried out on time series of significant wave height, residual water level, and river discharge, which show varying degrees of long-term and seasonal variability. The results from the proposed approach are comparable with the results from (a) a stationary EVA on quasi-stationary slices of non-stationary series and (b) the established method for non-stationary EVA. However, the proposed technique comes with advantages in both cases. For example, in contrast to (a), the proposed technique uses the whole time horizon of the series for the estimation of the extremes, allowing for a more accurate estimation of large return levels. Furthermore, with respect to (b), it decouples the detection of non-stationary patterns from the fitting of the extreme value distribution. 
As a result, the steps of the analysis are simplified and intermediate diagnostics are possible. In particular, the transformation can be carried out by means of simple statistical techniques such as low-pass filters based on the running mean and the standard deviation, and the fitting procedure is a stationary one with a few degrees of freedom and is easy to implement and control. An open-source MATLAB toolbox covering this methodology has been developed and is available at https://github.com/menta78/tsEva/ (Mentaschi et al., 2016).
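The transformation step can be sketched as a running-mean/running-standard-deviation normalization. This is a simplified stand-in for the low-pass filtering implemented in the tsEva toolbox, shown here in Python rather than MATLAB.

```python
import numpy as np

def ts_transform(x, window):
    """Transform a non-stationary series toward stationarity.

    Removes a running mean and divides by a running standard deviation,
    both estimated with a simple moving-average (boxcar) filter.
    Returns the stationarized series plus the two trend signals needed
    for the reverse transformation.
    """
    kernel = np.ones(window) / window
    mu = np.convolve(x, kernel, mode="same")                      # running mean
    sigma = np.sqrt(np.convolve((x - mu) ** 2, kernel, mode="same"))  # running std
    return (x - mu) / sigma, mu, sigma

def ts_inverse(y, mu, sigma):
    """Map results obtained on the stationary series back to the original scale."""
    return y * sigma + mu
```

Stationary EVA (GEV/GPD fitting with a constant shape parameter) is then applied to the transformed series, and the fitted return levels are mapped back through `ts_inverse` to obtain a time-varying extreme value distribution.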


Background: Statistical analysis of DNA microarray data provides a valuable diagnostic tool for the investigation of genetic components of diseases. To take advantage of the multitude of available data sets and analysis methods, it is desirable to combine both different algorithms and data from different studies. Applying ensemble learning, consensus clustering and cross-study normalization methods for this purpose in an almost fully automated process and linking different analysis modules together under a single interface would simplify many microarray analysis tasks. Results: We present ArrayMining.net, a web application for microarray analysis that provides easy access to a wide choice of feature selection, clustering, prediction, gene set analysis and cross-study normalization methods. In contrast to other microarray-related web tools, multiple algorithms and data sets for an analysis task can be combined using ensemble feature selection, ensemble prediction, consensus clustering and cross-platform data integration. By interlinking different analysis tools in a modular fashion, new exploratory routes become available, e.g. ensemble sample classification using features obtained from a gene set analysis and data from multiple studies. The analysis is further simplified by automatic parameter selection mechanisms and linkage to web tools and databases for functional annotation and literature mining. Conclusion: ArrayMining.net is a free web application for microarray analysis combining a broad choice of algorithms based on ensemble and consensus methods, using automatic parameter selection and integration with annotation databases.
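One simple form of ensemble feature selection, combining the rankings of several selectors by mean rank, can be sketched as follows. This is illustrative only and does not reproduce ArrayMining.net's actual ensemble schemes.

```python
def ensemble_rank(rankings):
    """Aggregate feature rankings from several selection methods.

    Each ranking is a list of feature names ordered best-first; all
    rankings are assumed to contain the same features. Features are
    re-ranked by their mean position across methods, so a feature must
    score well under several criteria to stay near the top.
    """
    features = rankings[0]
    mean_rank = {f: sum(r.index(f) for r in rankings) / len(rankings)
                 for f in features}
    return sorted(features, key=mean_rank.get)
```

The same aggregation idea underlies ensemble prediction (majority voting over classifiers) and consensus clustering (co-assignment frequencies over clusterings).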


This study focuses on the export performance of the economies of the 2004 EU enlargement between 1990 and 2013. The long time span analysed makes it possible to capture different stages in the relationship of these new members with the EU, before and after accession. The study is based on the Constant Market Share methodology of decomposing a country's ex-post export performance into different effects. Two different Constant Market Share Analyses (CMSA) were selected in order to disentangle, for the exports of the new members to the EU15, (i) the growth rate of exports and (ii) the growth rate of exports relative to the world. Both approaches are applied to manufactured products, first without disaggregating the results by sector and then grouping all products into two different classifications of sectors: one considering the technological intensity of manufactured exports and another evaluating the specialization factors of the products exported. The results provide information not only on the export performance of the ten economies as a group but also on each economy individually, and on the importance of each EU15 destination market to the export performance of these countries.
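A one-level Constant Market Share decomposition, in textbook form, can be sketched as follows. This is an illustrative sketch with made-up numbers; the study's two CMSA variants are more detailed.

```python
def cms_decompose(x0, x1, r_world, r_sector):
    """Decompose export growth into three Constant Market Share effects.

    x0, x1   : a country's exports per sector in the base and final period
    r_world  : growth rate of the reference market as a whole
    r_sector : growth rate of each sector's reference market

    Returns (world_growth, structure, competitiveness); by construction
    the three effects sum to the total change in exports.
    """
    world = r_world * sum(x0)  # growth expected from overall market expansion
    structure = sum((ri - r_world) * xi          # specialization in fast- or
                    for ri, xi in zip(r_sector, x0))  # slow-growing sectors
    compet = sum(b - (1 + ri) * a                # residual: market-share
                 for a, b, ri in zip(x0, x1, r_sector))  # gains or losses
    return world, structure, compet
```

For example, with base exports of 10 and 20 in two sectors, final exports of 14 and 22, world growth of 10% and sector growth of 30% and 5%, the total change of 6 splits into a world-growth effect of 3, a structure effect of 1, and a competitiveness effect of 2.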