901 results for Data dissemination and sharing


Relevance:

100.00%

Publisher:

Abstract:

The modern computer systems in use nowadays are mostly processor-dominant, which means that their memory is treated as a slave element with one major task: to serve the execution units' data requirements. This organization is based on the classical Von Neumann computer model, proposed seven decades ago, in the 1950s. This model suffers from a substantial processor-memory bottleneck because of the huge disparity between processor and memory working speeds. To solve this problem, in this paper we propose a novel architecture and organization of processors and computers that attempts to provide a stronger match between the processing and memory elements in the system. The proposed model utilizes a memory-centric architecture, wherein execution hardware is added to the memory code blocks, allowing them to perform instruction scheduling and execution, management of data requests and responses, and direct communication with the data memory blocks without using registers. This organization allows concurrent execution of all threads, processes or program segments that fit in the memory at a given time. Therefore, in this paper we describe several possibilities for organizing the proposed memory-centric system with multiple data and merged logic-memory blocks, utilizing a high-speed interconnection switching network.
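To make the idea concrete, here is a toy Python sketch (not the authors' design) of memory blocks that each hold their own code and data and execute their own instruction streams concurrently; the instruction format and operations are invented purely for illustration.

```python
# Toy illustration (not the authors' design): each memory block stores its own
# program and data and executes locally, so all resident blocks can run
# concurrently instead of funnelling every operand through a central CPU.
from concurrent.futures import ThreadPoolExecutor

class MemoryBlock:
    """A code/data block with its own minimal execution unit."""
    def __init__(self, program, data):
        self.program = program   # list of (op, dst, src1, src2) tuples
        self.data = dict(data)   # local data memory, addressed by name

    def run(self):
        # The block schedules and executes its own instructions; operands are
        # read from and written to local data memory, with no register file.
        for op, dst, a, b in self.program:
            if op == "add":
                self.data[dst] = self.data[a] + self.data[b]
            elif op == "mul":
                self.data[dst] = self.data[a] * self.data[b]
        return self.data

# Two program segments resident in memory at the same time.
blocks = [
    MemoryBlock([("add", "z", "x", "y")], {"x": 2, "y": 3}),
    MemoryBlock([("mul", "z", "x", "y")], {"x": 4, "y": 5}),
]
with ThreadPoolExecutor() as pool:  # concurrent execution of all blocks
    print([b["z"] for b in pool.map(MemoryBlock.run, blocks)])  # [5, 20]
```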

Relevance:

100.00%

Publisher:

Abstract:

This paper surveys the recent literature on convergence across countries and regions. I discuss the main convergence and divergence mechanisms identified in the literature and develop a simple model that illustrates their implications for income dynamics. I then review the existing empirical evidence and discuss its theoretical implications. Early optimism concerning the ability of a human capital-augmented neoclassical model to explain productivity differences across economies has been questioned on the basis of more recent contributions that make use of panel data techniques and obtain theoretically implausible results. Some recent research in this area tries to reconcile these findings with sensible theoretical models by exploring the role of alternative convergence mechanisms and the possible shortcomings of panel data techniques for convergence analysis.
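For readers unfamiliar with the workhorse specification in this literature, the sketch below fits the standard cross-country beta-convergence regression on synthetic data; it is not the model developed in the paper, and all numbers are made up.

```python
# Illustrative sketch of the standard cross-country (beta-)convergence
# regression used in this literature, on synthetic data -- not the paper's model:
#   g_i = alpha + beta * log(y_i0) + e_i,  beta < 0  =>  poorer economies grow faster.
import numpy as np

rng = np.random.default_rng(0)
log_y0 = rng.uniform(6, 11, size=100)                        # initial log income per capita
growth = 0.30 - 0.02 * log_y0 + rng.normal(0, 0.005, 100)    # simulated average growth rates

X = np.column_stack([np.ones_like(log_y0), log_y0])
alpha_hat, beta_hat = np.linalg.lstsq(X, growth, rcond=None)[0]
print(f"beta = {beta_hat:.4f}")   # negative estimate -> evidence of convergence
```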

Relevance:

100.00%

Publisher:

Abstract:

To make full use of research data, the bioscience community needs to adopt technologies and reward mechanisms that support interoperability and promote the growth of an open 'data commoning' culture. Here we describe the prerequisites for data commoning and present an established and growing ecosystem of solutions using the shared 'Investigation-Study-Assay' framework to support that vision.
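As a rough illustration of the layered-metadata idea only (the field names below are invented and do not follow the ISA-Tab specification), an investigation can nest studies, which in turn nest assays pointing to data files:

```python
# Schematic only -- field names are illustrative, not the ISA-Tab specification:
# metadata is layered into Investigation, Study and Assay levels so that
# datasets remain interoperable and reusable across tools.
investigation = {
    "title": "Example metabolomics investigation",
    "studies": [
        {
            "title": "Treatment vs control study",
            "samples": ["sample-01", "sample-02"],
            "assays": [
                {"measurement": "metabolite profiling",
                 "technology": "mass spectrometry",
                 "data_files": ["raw/run01.mzML"]},
            ],
        }
    ],
}
print(investigation["studies"][0]["assays"][0]["measurement"])
```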

Relevance:

100.00%

Publisher:

Abstract:

Secretory immunoglobulin A (SIgA) antibodies contribute to the defense of mucosal epithelia and play an important role in preventing pathogen adhesion to host cells, thereby blocking dissemination and further infection. This mechanism, referred to as immune exclusion, represents the dominant mode of action of the antibody. However, SIgA antibodies combine multiple facets, which together confer properties extending from intracellular and serosal neutralization of antigens to activation of non-inflammatory pathways and homeostatic control of the endogenous microbiota. The sum of these features suggests that future opportunities for translating research-based knowledge into the clinic include the mucosal delivery of bioactive antibodies capable of preserving immunoreactivity in the lung, the gastrointestinal tract and the genito-urinary tract for the treatment of infections. This article covers the structure of SIgA, the dissection of its mode of action in epithelia lining different mucosal surfaces and its potential in immunotherapy against infectious pathogens.

Relevance:

100.00%

Publisher:

Abstract:

The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming to provide the best possible generalization and predictive ability rather than concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows for the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is learned automatically from the data, providing the optimum mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data. With this advantage, it can provide an efficient means to model local anomalies that typically arise in the early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns, which is a possible limitation for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of Cs-137 activity from the measurements taken in the region of Briansk following the Chernobyl accident.
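A simplified sketch of the multi-scale idea, assuming a scikit-learn SVR with a custom kernel that mixes two RBF kernels of different length scales; unlike the paper's method, the mixing weight here would simply be tuned (e.g. by cross-validation), and all parameter values and data are illustrative.

```python
# Simplified sketch (not the authors' exact algorithm): mix a large-scale and a
# short-scale RBF kernel inside a standard SVR.  In the paper the short/large-
# scale mixture is learned from the data; here w would be tuned externally.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

def multiscale_kernel(X, Y, w=0.5, gamma_large=0.1, gamma_short=10.0):
    # gamma_large -> smooth large-scale trend; gamma_short -> local anomalies
    return w * rbf_kernel(X, Y, gamma=gamma_large) + (1 - w) * rbf_kernel(X, Y, gamma=gamma_short)

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(200, 2))                  # x, y sampling locations
values = np.sin(coords[:, 0]) + 0.3 * rng.normal(size=200)  # synthetic "activity" field

model = SVR(kernel=multiscale_kernel, C=10.0, epsilon=0.05)
model.fit(coords, values)
print(model.predict(coords[:5]))
```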

Relevance:

100.00%

Publisher:

Abstract:

Recent theoretical developments and case study evidence suggest a relationship between the military in politics and corruption. This study contributes to this literature by analyzing, theoretically and empirically, the link between the military in politics and corruption for the first time. Drawing on cross-sectional and panel data sets covering a large number of countries over the period 1984-2007, and using a variety of econometric methods, we find substantial empirical support for a positive relationship between the military in politics and corruption. In sum, our results reveal that a one standard deviation increase in the military in politics leads to a 0.22 unit increase in the corruption index. This relationship is shown to be robust to a variety of specification changes, different econometric techniques, different sample sizes, alternative corruption indices and the exclusion of outliers. This study suggests that the explanatory power of the military in politics is at least as important as that of the conventionally accepted causes of corruption, such as economic development.
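The headline figure can be read as a standardized effect, i.e. the estimated coefficient multiplied by one standard deviation of the regressor. The sketch below makes that arithmetic explicit with hypothetical inputs; only the resulting 0.22-unit effect comes from the abstract.

```python
# Worked illustration of how the headline figure is read: the standardized
# effect equals the estimated coefficient times the regressor's standard
# deviation.  beta_hat and sd_military are hypothetical; only the resulting
# 0.22-unit effect on the corruption index is quoted in the abstract.
beta_hat = 0.11      # hypothetical coefficient on "military in politics"
sd_military = 2.0    # hypothetical one-standard-deviation change in that index
effect_on_corruption = beta_hat * sd_military
print(effect_on_corruption)   # 0.22 units on the corruption index
```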

Relevance:

100.00%

Publisher:

Abstract:

In this paper we propose a novel empirical extension of the standard market microstructure order flow model. The main idea is that heterogeneity of beliefs in the foreign exchange market can cause model instability, and such instability has not been fully accounted for in the existing empirical literature. We investigate this issue using two different data sets and focusing on out-of-sample forecasts. Forecasting power is measured using standard statistical tests and, additionally, using an alternative approach based on measuring the economic value of the forecasts after building a portfolio of assets. We find that there is substantial economic value in conditioning on the proposed models.
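One common way to attach an economic value to exchange-rate forecasts, sketched below on synthetic data, is to trade on the sign of each out-of-sample forecast and compare the resulting returns with a passive benchmark; this is an illustration of the general idea, not the paper's exact portfolio construction.

```python
# Sketch of measuring the economic value of forecasts (not the paper's exact
# portfolio construction): go long or short the currency according to the sign
# of each out-of-sample forecast and compare against a passive benchmark.
import numpy as np

rng = np.random.default_rng(2)
returns = rng.normal(0, 0.006, size=250)                     # realized daily FX returns
forecasts = 0.4 * returns + rng.normal(0, 0.006, size=250)   # simulated forecasts with some skill

strategy = np.sign(forecasts) * returns   # return earned by trading on the forecast sign
print("strategy mean return:", strategy.mean())
print("buy-and-hold mean return:", returns.mean())
print("hit ratio:", (np.sign(forecasts) == np.sign(returns)).mean())
```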

Relevance:

100.00%

Publisher:

Abstract:

The importance of financial market reforms in combating corruption has been highlighted in the theoretical literature but has not been systematically tested empirically. In this study we provide a first pass at testing this relationship, using both linear and nonmonotonic forms of the relationship between corruption and financial intermediation. Our study finds a negative and statistically significant impact of financial intermediation on corruption. Specifically, the results imply that a one standard deviation increase in financial intermediation is associated with a decrease in corruption of 0.20 points, or 16 percent of the standard deviation in the corruption index. This relationship is shown to be robust to a variety of specification changes, including: (i) different sets of control variables; (ii) different econometric techniques; (iii) different sample sizes; (iv) alternative corruption indices; (v) removal of outliers; (vi) different sets of panels; and (vii) allowing for cross-country interdependence (contagion effects) of corruption.
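The two functional forms mentioned above can be illustrated with a simple sketch on synthetic data, fitting a linear specification and a quadratic (nonmonotonic) one; the coefficients below are invented and are not the study's estimates.

```python
# Illustrative sketch (synthetic data) of the two specifications tested: a
# linear and a nonmonotonic (quadratic) relationship between corruption and
# financial intermediation; coefficient values are not the study's estimates.
import numpy as np

rng = np.random.default_rng(3)
fin = rng.uniform(0, 1.5, size=300)                           # financial intermediation (e.g. credit/GDP)
corruption = 4.0 - 1.0 * fin + 0.2 * fin**2 + rng.normal(0, 0.3, 300)

X_lin = np.column_stack([np.ones_like(fin), fin])
X_quad = np.column_stack([np.ones_like(fin), fin, fin**2])
b_lin = np.linalg.lstsq(X_lin, corruption, rcond=None)[0]
b_quad = np.linalg.lstsq(X_quad, corruption, rcond=None)[0]
print("linear slope:", b_lin[1])        # negative: more intermediation, less corruption
print("quadratic terms:", b_quad[1:])   # tests whether the effect tapers off
```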

Relevance:

100.00%

Publisher:

Abstract:

Diffusion MRI is a well-established imaging modality providing a powerful way to probe the structure of the white matter non-invasively. Despite its potential, the intrinsically long scan times of these sequences have hampered their use in clinical practice. For this reason, a large variety of methods have recently been proposed to shorten the acquisition times. Among them, spherical deconvolution approaches have gained a lot of interest for their ability to reliably recover the intra-voxel fiber configuration with a relatively small number of data samples. To overcome the intrinsic instabilities of deconvolution, these methods use regularization schemes generally based on the assumption that the fiber orientation distribution (FOD) to be recovered in each voxel is sparse. The well-known Constrained Spherical Deconvolution (CSD) approach resorts to Tikhonov regularization, based on an ℓ2-norm prior, which promotes a weak version of sparsity. Moreover, in the last few years compressed sensing has been advocated to further accelerate the acquisitions, and ℓ1-norm minimization is generally employed as a means to promote sparsity in the recovered FODs. In this paper, we provide evidence that the use of an ℓ1-norm prior to regularize this class of problems is somewhat inconsistent with the fact that the fiber compartments all sum up to unity. To overcome this ℓ1 inconsistency while simultaneously exploiting sparsity more effectively than through an ℓ2 prior, we reformulate the reconstruction problem as a constrained formulation between a data term and a sparsity prior consisting of an explicit bound on the ℓ0-norm of the FOD, i.e. on the number of fibers. The method has been tested on both synthetic and real data. Experimental results show that the proposed ℓ0 formulation significantly reduces modeling errors compared to the state-of-the-art ℓ2 and ℓ1 regularization approaches.
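A minimal sketch of what an ℓ0-constrained recovery can look like, assuming an iterative hard-thresholding solver with a non-negativity projection; this illustrates the constrained formulation, not the authors' algorithm, and the dictionary below is random rather than a real response-function basis.

```python
# Simplified sketch of l0-constrained recovery via iterative hard thresholding
# with a non-negativity projection -- an illustration of the constrained
# formulation, not the authors' exact solver.  A maps FOD coefficients on
# candidate fiber orientations to the measured diffusion signal.
import numpy as np

def l0_deconvolve(A, y, k, n_iter=500):
    """Approximately solve min ||A f - y||^2  s.t.  ||f||_0 <= k,  f >= 0."""
    n = A.shape[1]
    f = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # safe gradient step size
    for _ in range(n_iter):
        f = f - step * A.T @ (A @ f - y)          # gradient step on the data term
        f = np.clip(f, 0, None)                   # fiber fractions are non-negative
        keep = np.argsort(f)[-k:]                 # hard threshold: keep the k largest entries
        mask = np.zeros(n, dtype=bool)
        mask[keep] = True
        f[~mask] = 0.0
    return f

rng = np.random.default_rng(4)
A = rng.normal(size=(60, 200))                    # 60 measurements, 200 candidate orientations
f_true = np.zeros(200)
f_true[[10, 50]] = [0.6, 0.4]                     # two fiber compartments summing to one
y = A @ f_true + 0.01 * rng.normal(size=60)
print(np.nonzero(l0_deconvolve(A, y, k=2))[0])    # typically recovers the two active orientations
```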

Relevance:

100.00%

Publisher:

Abstract:

The aim of this work is to evaluate the capabilities and limitations of chemometric methods and other mathematical treatments applied to spectroscopic data, and more specifically to paint samples. The uniqueness of the spectroscopic data comes from the fact that they are multivariate - a few thousand variables - and highly correlated. Statistical methods are used to study and discriminate samples. A collection of 34 red paint samples was measured by infrared and Raman spectroscopy. Data pretreatment and variable selection demonstrated that the use of Standard Normal Variate (SNV), together with removal of the noisy variables by selecting the ranges 650-1830 cm−1 and 2730-3600 cm−1, provided the optimal results for the infrared analysis. Principal component analysis (PCA) and hierarchical cluster analysis (HCA) were then used as exploratory techniques to provide evidence of structure in the data, find clusters and detect outliers. With the FTIR spectra, the principal components (PCs) correspond to binder types and the presence/absence of calcium carbonate; 83% of the total variance is explained by the first four PCs. As for the Raman spectra, we observe six clusters corresponding to the different pigment compositions when plotting the first two PCs, which account for 37% and 20% of the total variance, respectively. In conclusion, the use of chemometrics for the forensic analysis of paints provides a valuable tool for objective decision-making, a reduction of possible classification errors and better efficiency, yielding robust results with time-saving data treatments.
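The pretreatment and exploration steps described above can be sketched as follows, using synthetic spectra as placeholders; SNV is a row-wise centring and scaling of each spectrum, after which PCA summarizes the retained variables.

```python
# Sketch of the pretreatment and exploration pipeline: Standard Normal Variate
# (SNV) scaling of each spectrum followed by PCA.  The spectra are synthetic
# placeholders, and the wavenumber selection is reduced to simple slicing.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
spectra = rng.normal(size=(34, 1200))            # 34 paint samples x 1200 retained variables

# SNV: centre and scale every spectrum individually (row-wise)
snv = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

pca = PCA(n_components=4)
scores = pca.fit_transform(snv)                  # sample scores on the first four PCs
print("explained variance:", pca.explained_variance_ratio_.round(2))
print("scores shape:", scores.shape)             # (34, 4), used for the PC1-PC2 scatter plot
```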

Relevance:

100.00%

Publisher:

Abstract:

The CIPA programme is a collaborative project including two entomologists from France and seven South and Central American countries. Its objective is the development of an expert system for computer-aided identification of phlebotomine sandflies from the Americas. It also includes the creation of databases for bibliographic, taxonomic and biogeographic data. Participant consensus on taxonomic prerequisites, standardization of bibliographic data collections and selection of descriptive variables for the final programme has been established through continuous communication among participants and annual meetings. The adopted checklist of American sandflies presented here includes 386 specific taxa, ordered into genera and 28 sub-genera or species groups.

Relevance:

100.00%

Publisher:

Abstract:

1. Species distribution modelling is used increasingly in both applied and theoretical research to predict how species are distributed and to understand attributes of species' environmental requirements. In species distribution modelling, various statistical methods are used that combine species occurrence data with environmental spatial data layers to predict the suitability of any site for that species. While the number of data sharing initiatives involving species' occurrences in the scientific community has increased dramatically over the past few years, various data quality and methodological concerns related to using these data for species distribution modelling have not been addressed adequately. 2. We evaluated how uncertainty in georeferences and associated locational error in occurrences influence species distribution modelling using two treatments: (1) a control treatment, where models were calibrated with the original, accurate data, and (2) an error treatment, where data were first degraded spatially to simulate locational error. To incorporate error into the coordinates, we moved each coordinate by a random distance drawn from a normal distribution with a mean of zero and a standard deviation of 5 km. We evaluated the influence of error on the performance of 10 commonly used distribution modelling techniques applied to 40 species in four distinct geographical regions. 3. Locational error in occurrences reduced model performance in three of these regions; nevertheless, relatively accurate predictions of species distributions were possible for most species, even with degraded occurrences. Two species distribution modelling techniques, boosted regression trees and maximum entropy, were the best-performing models in the face of locational errors. The results obtained with boosted regression trees were only slightly degraded by errors in location, and the results obtained with the maximum entropy approach were not affected by such errors. 4. Synthesis and applications. To use the vast array of occurrence data that currently exists for research and management relating to the geographical ranges of species, modellers need to know the influence of locational error on model quality and whether some modelling techniques are particularly robust to error. We show that certain modelling techniques are particularly robust to a moderate level of locational error and that useful predictions of species distributions can be made even when occurrence data include some error.
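A minimal sketch of the error treatment described in point 2, assuming coordinates in a projected, metre-based system so that the 5 km noise can be drawn directly in metres; for latitude/longitude data the offsets would first need converting to degrees.

```python
# Minimal sketch of the error treatment in point 2: perturb each occurrence
# record with Gaussian noise of mean 0 and standard deviation 5 km.  Coordinates
# are assumed to be in a projected, metre-based system; the example records are
# illustrative placeholders.
import numpy as np

rng = np.random.default_rng(6)
occurrences = np.array([[512_300.0, 4_178_950.0],   # illustrative x, y in metres
                        [498_870.0, 4_190_420.0]])

sd_metres = 5_000.0                                 # 5 km locational error
degraded = occurrences + rng.normal(0.0, sd_metres, size=occurrences.shape)
print(degraded - occurrences)                       # per-record displacement in metres
```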