912 results for Data distribution


Relevância: 60.00%

Publicador:

Resumo:

PURPOSE To investigate retrograde axonal degeneration for its potential to cause microcystic macular edema (MME), a maculopathy that has been previously described in patients with demyelinating disease; to identify risk factors for MME; and to expand anatomic knowledge of MME. DESIGN Retrospective case series. PARTICIPANTS We included 117 consecutive patients and 180 eyes with confirmed optic neuropathy of variable etiology. Patients with glaucoma were excluded. METHODS We determined age, sex, visual acuity, etiology of optic neuropathy, and the temporal and spatial characteristics of MME. Eyes with MME were compared with eyes with optic neuropathy alone and with healthy fellow eyes. Using retinal layer segmentation, we quantitatively measured the intraretinal anatomy. MAIN OUTCOME MEASURES Demographic data, distribution of MME in the retina, and thickness of retinal layers were analyzed. RESULTS We found MME in 16 eyes (8.8%) from 9 patients, none of whom had multiple sclerosis or neuromyelitis optica. The MME was restricted to the inner nuclear layer (INL) and had a characteristic perifoveal circular distribution. Compared with healthy controls, MME was associated with significant thinning of the ganglion cell layer and nerve fiber layer, as well as a thickening of the INL and the deeper retinal layers. Youth is a significant risk factor for MME. CONCLUSIONS Microcystic macular edema is not specific for demyelinating disease. It is a sign of optic neuropathy irrespective of its etiology. The distinctive intraretinal anatomy suggests that MME is caused by retrograde degeneration of the inner retinal layers, resulting in impaired fluid resorption in the macula.

Relevância: 60.00%

Publicador:

Resumo:

A variety of occupational hazards are indigenous to academic and research institutions, ranging from traditional life safety concerns, such as fire safety and fall protection, to specialized occupational hygiene issues such as exposure to carcinogenic chemicals, radiation sources, and infectious microorganisms. Institutional health and safety programs are constantly challenged to establish and maintain adequate protective measures for this wide array of hazards. A unique subset of academic and research institutions is classified as historically Black universities, which provide educational opportunities primarily to minority populations. State-funded minority schools receive fewer resources than their non-minority counterparts, resulting in a reduced ability to provide certain programs and services. Comprehensive health and safety services may be one of the services compromised at these institutions, resulting in uncontrolled exposures to various workplace hazards. Such a result would also be contrary to the national health status objectives to improve preventive health care measures for minority populations.

To determine whether differences exist, a cross-sectional survey was performed to evaluate the relative status of health and safety programs within minority and non-minority state-funded academic and research institutions. Data were obtained from direct mail questionnaires, supplemented by data from publicly available sources. Parameters for comparison included reported numbers of full- and part-time health and safety staff, reported OSHA 200 log (or equivalent) values, and reported workers' compensation experience modifiers. The relative impact of institutional minority status, institution size, and OSHA regulatory environment was also assessed. Additional health and safety program descriptors were solicited in an attempt to develop a preliminary profile of the hazards present in this unique work setting.

Survey forms were distributed to 24 minority and 51 non-minority institutions. A total of 72% of the questionnaires were returned, with 58% of the minority and 78% of the non-minority institutions participating. The mean number of reported full-time health and safety staff was 1.14 for the responding minority institutions, compared to 3.12 for the responding non-minority institutions. Data distribution variances were stabilized using log-normal transformations, and although subsequent analysis indicated statistically significant differences, the differences were found to be predicted by institution size only, and not by minority status or OSHA regulatory environment. Similar results were noted for estimated full-time-equivalent health and safety staffing levels. Significant differences were not noted between reported OSHA 200 log (or equivalent) data, and a lack of information on workers' compensation experience modifiers prevented comparisons of insurance premium expenditures. Other descriptive health and safety program information served to validate the study's presupposition that the inclusion criteria would encompass organizations with occupational risks from all four major hazard categories. Worker medical surveillance programs appeared to exist at most institutions, but the specific tests completed were not readily identifiable.

The results of this study serve as a preliminary description of the health and safety programs for a unique set of workplaces that has not been previously investigated. Numerous opportunities for further research are noted, including efforts to quantify the relative amount of each hazard present, further definition of the programs reported to be in place, determination of other means to measure health outcomes on campuses, and comparisons among other culturally diverse workplaces.
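
For illustration only, the following minimal sketch shows the kind of analysis described above: a log transformation to stabilize the variance of skewed staffing counts, followed by a regression on institution size, minority status, and OSHA environment. The data frame, column names, and values are entirely hypothetical, and the model is a simplification of the study's actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: one row per responding institution.
df = pd.DataFrame({
    "staff":    [1, 2, 1, 4, 3, 6, 2, 5],          # full-time health & safety staff
    "students": [3000, 8000, 2500, 20000, 12000, 30000, 5000, 25000],  # size proxy
    "minority": [1, 1, 1, 0, 0, 0, 1, 0],           # 1 = minority institution
    "osha":     [1, 0, 1, 1, 0, 1, 0, 0],           # 1 = state OSHA plan in effect
})

# Log transformation stabilizes the variance of the skewed staffing counts.
df["log_staff"] = np.log(df["staff"])

# Are staffing levels predicted by size, minority status, or OSHA environment?
model = smf.ols("log_staff ~ np.log(students) + minority + osha", data=df).fit()
print(model.summary())
```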

Relevância: 60.00%

Publicador:

Resumo:

Postcruise X-ray diffraction (XRD) data for 95 whole-rock samples from Holes 1188A, 1188F, 1189A, and 1189B are presented. The samples represent the alteration types recovered during Leg 193. The data set is incorporated into the shipboard XRD data set. Based on the newly obtained XRD data, the distribution of alteration phases was redrawn for Ocean Drilling Program Sites 1188 and 1189.

Relevância: 60.00%

Publicador:

Resumo:

The Self-Organizing Map (SOM) is a neural network model that performs an ordered projection of a high-dimensional input space onto a low-dimensional topological structure. The process by which such a mapping is formed is defined by the SOM algorithm, which is a competitive, unsupervised and nonparametric method, since it does not make any assumption about the input data distribution. The feature maps provided by this algorithm have been successfully applied to vector quantization, clustering and high-dimensional data visualization. However, the initialization of the network topology and the selection of the SOM training parameters are two difficult tasks, because the distribution of the input signals is unknown. A misconfiguration of these parameters can generate a low-quality feature map, so it is necessary to have some measure of the degree of adaptation of the SOM network to the input data model. Topology preservation is the most common concept used to implement this measure. Several qualitative and quantitative methods have been proposed for measuring the degree of SOM topology preservation, particularly for Kohonen's model. In this work, two methods for measuring the topology preservation of the Growing Cell Structures (GCS) model are proposed: the topographic function and the topology-preserving map.
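
As an illustration of what a quantitative topology-preservation measure looks like, the sketch below computes the classical topographic error for a rectangular Kohonen map. It is not the GCS-specific topographic function proposed in this work, and the `weights`, `grid_coords` and `data` arrays are hypothetical placeholders for a trained map and an input sample.

```python
import numpy as np

def topographic_error(weights, grid_coords, data):
    """Fraction of input samples whose best- and second-best-matching units
    are not neighbours on the map grid (lower means better topology preservation)."""
    errors = 0
    for x in data:
        d = np.linalg.norm(weights - x, axis=1)   # distance from x to every unit
        bmu1, bmu2 = np.argsort(d)[:2]            # best and second-best matching units
        if np.abs(grid_coords[bmu1] - grid_coords[bmu2]).max() > 1:
            errors += 1                           # the two winners are not grid neighbours
    return errors / len(data)

# Toy example: a 4x4 map on 2-D data (weights are random here, so the error is high).
rng = np.random.default_rng(0)
grid_coords = np.array([[i, j] for i in range(4) for j in range(4)])
weights = rng.random((16, 2))
data = rng.random((100, 2))
print(topographic_error(weights, grid_coords, data))
```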

Relevância: 60.00%

Publicador:

Resumo:

Natural regeneration in managed stone pine (Pinus pinea L.) forests in the Spanish Northern Plateau is not achieved successfully under current silviculture practices, which constitutes a main concern for forest managers. We modelled the spatio-temporal features of primary dispersal to test whether (a) present low stand densities constrain natural regeneration success and (b) seed release is a climate-controlled process. The study is based on data collected in a 6-year seed-trap experiment covering different regeneration felling intensities. From a spatial perspective, we fitted alternative established kernels under different data distribution assumptions to obtain a spatial model able to predict P. pinea seed rain. Because of the umbrella-like crown of P. pinea, the models were adapted to account for a crown effect by correcting the distances between potential seed arrival locations and seed sources. In addition, individual tree fecundity was assessed independently of existing models, improving parameter estimation stability. Seed rain simulation enabled us to calculate seed dispersal indexes for diverse silvicultural regeneration treatments. The best-fitting spatial model (Weibull, Poisson assumption) predicted a highly clumped dispersal pattern, resulting in a proportion of gaps where no seed arrival is expected (dispersal limitation) between 0.25 and 0.30 for intermediate-intensity regeneration fellings and over 0.50 for intense fellings. To describe the temporal pattern, the proportion of seeds released during monthly intervals was modelled as a function of climate variables (rainfall events) through a linear model that accounted for temporal autocorrelation, whereas cone opening took place above a temperature threshold. Our findings suggest the application of less intensive regeneration fellings, to be carried out after years of successful seedling establishment and, seasonally, after the main rainfall period (late fall). This schedule would avoid dispersal limitation and would allow complete seed release. These modifications to present silviculture practices would produce a more efficient seed shadow in managed stands.
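
To make the dispersal-limitation idea concrete, here is a minimal, purely illustrative simulation of seed rain under a Weibull dispersal kernel with a Poisson seed-count assumption. Tree positions, fecundities and kernel parameters are invented, and the crown-distance correction and fitted fecundity model of the study are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def weibull_kernel(d, scale, shape):
    """2-D dispersal location kernel whose seed-travel distance follows a
    Weibull(scale, shape) distribution; integrates to 1 over the plane."""
    return shape / (2 * np.pi * scale**2) * (d / scale)**(shape - 2) * np.exp(-(d / scale)**shape)

# Hypothetical stand: 30 seed trees on a 1-ha plot with lognormal fecundities (seeds/year).
trees = rng.uniform(0, 100, size=(30, 2))
fecundity = rng.lognormal(mean=5.0, sigma=0.5, size=30)

# Regular grid of potential seed-arrival cells (2 m spacing).
xs, ys = np.meshgrid(np.arange(0, 100, 2.0), np.arange(0, 100, 2.0))
cells = np.column_stack([xs.ravel(), ys.ravel()])

# Expected seed rain per cell, then a Poisson draw of realized seed counts.
dist = np.linalg.norm(cells[:, None, :] - trees[None, :, :], axis=2)
dist = np.maximum(dist, 0.5)                      # avoid the kernel singularity at d = 0
cell_area = 2.0 * 2.0
expected = cell_area * (fecundity[None, :] * weibull_kernel(dist, scale=12.0, shape=1.3)).sum(axis=1)
counts = rng.poisson(expected)

# Dispersal limitation: proportion of cells receiving no seeds.
print("dispersal limitation:", np.mean(counts == 0))
```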

Relevância: 60.00%

Publicador:

Resumo:

The final aim of the research involved in this doctoral thesis is the estimation of the total ice volume of the more than 1600 glaciers of Svalbard, in the Arctic, and thus their potential contribution to sea-level rise under a global warming scenario. The most accurate calculations of glacier volumes are those based on ice thicknesses measured by ground-penetrating radar (GPR). However, such measurements are not viable for very large sets of glaciers, because of their cost, logistic difficulties and time requirements, especially in polar or mountain regions. In contrast, the calculation of glacier areas from satellite images is perfectly viable at global and regional scales, so volume-area scaling relationships are the most useful tool to determine glacier volumes at those scales, as done for Svalbard in this thesis. As part of the PhD work, we have compiled an inventory of the radio-echo-sounded glaciers in Svalbard, and we have performed volume calculations for more than 80 glacier basins in Svalbard from GPR data. These volumes have been used to calibrate the volume-area relationships derived in this dissertation. The GPR data were obtained during fieldwork campaigns carried out by international teams, often led by the Group of Numerical Simulation in Science and Engineering of the Technical University of Madrid, to which the PhD candidate and her supervisors belong. Furthermore, we have developed a methodology to estimate the error in the volume calculation, which includes a novel technique to calculate the interpolation error for data sets of the type produced by GPR profiling, which show very characteristic spatial distribution patterns but very irregular data density. We have derived scaling relationships specific to Svalbard glaciers, exploring the sensitivity of the scaling parameters to different glacier morphologies and adding new variables. In particular, we performed experiments to verify whether scaling relationships obtained by characterizing individual glaciers by size, slope or shape imply significant differences in the estimated volume of the total population of Svalbard glaciers, and whether this partitioning implies any noticeable pattern in the scaling-relationship parameters.

Our results indicate that, for a fixed value of the multiplicative factor in the scaling relationship, the exponent of the area in the volume-area relationship decreases as slope and shape factor increase, whereas size-based classifications do not reveal any clear trend. This means that steep and cirque-type glaciers are less sensitive to changes in glacier area. Moreover, the volumes of the total population of Svalbard glaciers calculated with partitioning into subgroups by size and slope are 1-4% smaller than those obtained considering all glaciers without partitioning, whereas the volumes calculated with partitioning by shape are 3-5% larger. We also performed multivariate experiments attempting to optimally predict the volume of Svalbard glaciers from a combination of different predictors. Our results show that a simple power-type V-A model explains 98.6% of the variance. Only the predictor glacier length provides statistical significance when used in addition to glacier area, though the coefficient of determination decreases compared with the simpler V-A model. The predictor elevation range does not provide additional information when used together with glacier area. Our estimates of the volume of the entire population of Svalbard glaciers, using the different scaling relationships derived in this thesis, range from 6890 to 8106 km3, with estimated relative errors in total volume of the order of 6.6-8.1%. The average of all our estimates, which can be taken as our best estimate of the volume, is 7504 km3. In terms of sea-level equivalent (SLE), our volume estimates correspond to a potential sea-level rise of 17-20 mm SLE, averaging 19 ± 2 mm SLE, where the quoted error corresponds to our estimated relative error in volume. For comparison, estimates using the V-A scaling relations found in the literature range from 13 to 26 mm SLE, averaging 20 ± 2 mm SLE, where the quoted error represents the standard deviation of the different estimates.
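
As a toy illustration of the volume-area scaling approach (not the thesis's calibrated relationships), the sketch below fits a power law V = c·A^γ by linear regression in log-log space and applies it to an inventory of glacier areas; all numbers are invented.

```python
import numpy as np

# Hypothetical calibration set: GPR-derived volumes (km3) and areas (km2)
# for a handful of glacier basins.
area   = np.array([1.2, 3.5, 8.0, 15.0, 40.0, 110.0, 260.0])
volume = np.array([0.04, 0.15, 0.45, 1.0, 3.5, 12.0, 35.0])

# Power-law scaling V = c * A**gamma, fitted as a straight line in log-log space.
gamma, log_c = np.polyfit(np.log(area), np.log(volume), 1)
c = np.exp(log_c)
print(f"V ~ {c:.3f} * A^{gamma:.2f}")

# Regional estimate: apply the fitted relation to every glacier area in an
# inventory (areas here are again hypothetical).
inventory_areas = np.array([0.5, 2.0, 6.0, 20.0, 75.0, 300.0])
total_volume = np.sum(c * inventory_areas**gamma)
print(f"total volume ~ {total_volume:.1f} km3")
```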

Relevância: 60.00%

Publicador:

Resumo:

Neuromuscular electrical stimulation (NMES) is a recent therapeutic technique in the treatment of oropharyngeal dysphagia. Few studies have used NMES in oncological cases, and many questions remain about the method of application and the results of different stimulation conditions in this population. This study aimed to verify the immediate effect of sensory and motor NMES on the oral and pharyngeal phases of swallowing in patients treated for head and neck cancer. To this end, a cross-sectional interventional study was carried out including 11 adult and elderly patients (median age 59 years) affected by head and neck cancer. All individuals underwent videofluoroscopic swallowing examination, in which, in randomized order, they were asked to swallow 5 ml of food in liquid, honey and pudding consistencies under three distinct conditions: without stimulation, with sensory NMES, and with motor NMES. The degree of swallowing dysfunction was classified using the Dysphagia Outcome and Severity Scale (DOSS), together with the presence of food stasis (Eisenhuber scale), laryngeal penetration and laryngotracheal aspiration (Penetration and Aspiration Scale, PAS), as well as oral and pharyngeal transit times (in seconds). To compare the results of the three stimulation conditions on the residue scale, the penetration-aspiration scale, the DOSS scale, and the oral and pharyngeal transit times, the Friedman test or repeated-measures analysis of variance was applied (according to the data distribution). A significance level of 5% was adopted for all tests. The results showed improvement with sensory and motor stimulation on the DOSS and PAS scales for one patient treated for oral cancer and one treated for laryngeal cancer, and worsening, on both scales, for two patients (oral cancer), one under motor stimulation and the other under sensory stimulation. Application of the Eisenhuber scale showed that NMES, at both sensory and motor levels, modified the presence of residue in a variable way for the oral cancer cases, whereas for the patient with laryngeal cancer there was a reduction of residue in the vallecula/tongue base with both sensory and motor stimulation, along with an increase of residue on the posterior pharyngeal wall with motor stimulation. In addition, no statistically significant difference was found in oral and pharyngeal transit times across the different stimulation conditions for all the consistencies tested (p>0.05). In view of these findings, it was concluded that NMES, at both sensory and motor levels, had a variable immediate impact on the oral and pharyngeal phases of swallowing, and may improve swallowing function in patients with significant dysphagia after treatment for head and neck cancer, with regard to the degree of dysphagia and the presence of penetration and aspiration.
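
Since the abstract states that the Friedman test (or repeated-measures ANOVA, depending on the data distribution) was used to compare the three stimulation conditions, a minimal sketch of that comparison with invented transit-time data is given below; it is only an illustration of the statistical step, not the study's analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements: oral transit time (s) for the same 11 patients
# under the three conditions (no stimulation, sensory NMES, motor NMES).
rng = np.random.default_rng(1)
no_stim = rng.normal(1.5, 0.4, 11)
sensory = no_stim + rng.normal(0.0, 0.2, 11)
motor   = no_stim + rng.normal(0.0, 0.2, 11)

# Friedman test for repeated measures when normality cannot be assumed.
stat, p = stats.friedmanchisquare(no_stim, sensory, motor)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")
```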

Relevância: 60.00%

Publicador:

Resumo:

In this paper, parallel Relaxed and Extrapolated algorithms based on the Power method for accelerating the PageRank computation are presented. Different parallel implementations of the Power method and the proposed variants are analyzed using different data distribution strategies. The reported experiments show the behavior and effectiveness of the designed algorithms for realistic test data using OpenMP, MPI, or a hybrid OpenMP/MPI approach to exploit the benefits of shared memory inside the nodes of current SMP supercomputers.
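
For reference, this is a minimal serial sketch of the underlying Power method for PageRank on a toy link matrix; the relaxed/extrapolated variants and the parallel data distribution strategies studied in the paper are not shown.

```python
import numpy as np

def pagerank_power(adj, alpha=0.85, tol=1e-10, max_iter=200):
    """Basic (non-parallel) Power method for PageRank on a dense adjacency matrix.
    adj[i, j] = 1 if page i links to page j."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Column-stochastic transition matrix; dangling pages spread their rank uniformly.
    P = np.where(out_deg[:, None] > 0, adj / np.maximum(out_deg[:, None], 1), 1.0 / n).T
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        x_new = alpha * P @ x + (1 - alpha) / n
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 0, 0]], dtype=float)   # last page is dangling
print(pagerank_power(adj))
```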

Relevância: 60.00%

Publicador:

Resumo:

Virtual learning environments (VLEs) have evolved considerably in terms of their capabilities and the tools and activities they provide. VLEs give us access to large quantities of data resulting from the activities that both students and teachers develop in those environments. Monitoring undergraduates' activities in VLEs is important because it provides, in a structured way, a number of indicators that help us understand the learning process more deeply and propose improvements in teaching and learning strategies as well as in the institution's virtual environment. Although VLEs provide several sectorial statistics, they do not provide knowledge about the institution's evolution over time. Therefore, we consider the analysis of activity logs in VLEs over a period of five years to be paramount. This paper focuses on the analysis of the activities developed by students in a virtual learning environment, based on a sample of approximately 7000 undergraduate students per year, over five academic years, from 2009/2010 to 2013/2014. The main aims of this research are to assess the evolution of activity logs in the virtual learning environment of a Portuguese public higher education institution, in order to fill possible gaps and to point to new forms of use of the environment. The results of the data analysis show that, overall, the number of accesses to the virtual learning environment increased over the five years under study. The most used tools were Resources, Messages and Assignments, and the most frequent activities developed with these tools were, respectively, consulting information, sending messages and submitting assignments. The frequency of accesses to the virtual learning environment was characterized according to the number of accesses in the activity log. The data distribution was divided into five frequency categories, named very low, low, moderate, high and very high, bounded by the 20th, 40th, 60th, 80th and 100th percentiles, respectively. The study of activity logs of virtual learning environments is important not only because they provide real knowledge of the use that undergraduates make of these environments, but also because of the possibilities they create for identifying the need for new pedagogical approaches or for reinforcing previously consolidated ones.
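
A minimal sketch of the percentile-based frequency categorization described above, using an invented distribution of per-student access counts:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Hypothetical activity log: number of VLE accesses per student over one academic year.
accesses = pd.Series(rng.negative_binomial(5, 0.02, size=7000), name="accesses")

# Five frequency categories bounded by the 20th, 40th, 60th, 80th and 100th percentiles.
labels = ["very low", "low", "moderate", "high", "very high"]
category = pd.qcut(accesses, q=[0, 0.2, 0.4, 0.6, 0.8, 1.0], labels=labels)

print(category.value_counts().reindex(labels))  # roughly 1400 students per category
```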

Relevância: 60.00%

Publicador:

Resumo:

Long-term changes in the beach fauna at Duck, North Carolina, were investigated. Twenty-one stations located on three transects on the ocean side and twenty-four stations located on three transects on the sound side were sampled seasonally from November 1980 to July 1981. The data collected in this study were compared to a previous study conducted in 1976 (Matta, 1977) to investigate the potential effects of the construction of the CERC Field Research Facility pier on the adjacent beaches. No effects on the benthic fauna were found. Changes observed in the benthic macrofauna on the ocean beaches were well within the range attributable to the natural variation of an open-coast system. The ocean beach macrofauna was observed to form a single community migrating on and off the beach with the seasons. On the sound beaches, changes were detected in the benthic macrofauna; however, these were attributed to a salinity increase during the 1981 sampling year. (Author)

Relevância: 60.00%

Publicador:

Resumo:

Spatial data mining has recently emerged from a number of real applications, such as real-estate marketing, urban planning, weather forecasting, medical image analysis, and road traffic accident analysis. It demands efficient solutions for many new, expensive, and complicated problems. In this paper, we investigate the problem of evaluating the top k distinguished “features” for a “cluster” based on weighted proximity relationships between the cluster and the features. We measure proximity in an average fashion to address possible non-uniform data distribution in a cluster. Combining a standard multi-step paradigm with new lower and upper proximity bounds, we present an efficient algorithm to solve the problem. The algorithm is implemented in several different modes. Our experimental results not only compare these modes but also illustrate the efficiency of the algorithm.
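
A brute-force sketch of the core ranking criterion (average weighted proximity between a cluster and candidate features) is shown below for illustration; the multi-step pruning with lower and upper proximity bounds that makes the paper's algorithm efficient is omitted, and all data are invented.

```python
import numpy as np

def top_k_features(cluster_points, feature_points, weights, k=3):
    """Rank spatial features for a cluster by weighted average proximity.
    Proximity is averaged over all cluster points to tolerate non-uniform
    data distribution inside the cluster."""
    scores = []
    for f, w in zip(feature_points, weights):
        d = np.linalg.norm(cluster_points - f, axis=1)   # distances to every cluster point
        scores.append(w / d.mean())                      # closer on average and heavier => higher score
    order = np.argsort(scores)[::-1]
    return order[:k], np.array(scores)[order[:k]]

rng = np.random.default_rng(3)
cluster = rng.normal([50, 50], 5, size=(200, 2))               # points of one cluster
features = np.array([[52, 48], [90, 10], [55, 60], [20, 80]])  # candidate features
weights = np.array([1.0, 2.0, 1.5, 1.0])                       # feature importance weights

idx, sc = top_k_features(cluster, features, weights, k=2)
print("top features:", idx, "scores:", sc)
```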

Relevância: 60.00%

Publicador:

Resumo:

Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging at an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.

A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
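
The quantities mentioned above are straightforward to compute once the principal angles between two subspaces are known; the following sketch uses random subspaces in place of learned class subspaces and is only an illustration of the geometry, not of TRAIT or LRT.

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(4)

# Two hypothetical class subspaces in R^20, spanned by the columns of A and B.
A = np.linalg.qr(rng.normal(size=(20, 3)))[0]
B = np.linalg.qr(rng.normal(size=(20, 3)))[0]

theta = subspace_angles(A, B)            # principal angles, largest first
print("principal angles (rad):", theta)

# Quantities that govern misclassification in the two regimes described above:
print("product of sines:", np.prod(np.sin(theta)))        # vanishing model mismatch
print("sum of squared sines:", np.sum(np.sin(theta)**2))  # larger model mismatch
```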

The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer degraded performance when generalizing to unseen (test) data.

From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides us with a way of regularizing the learning machine, so as to reduce the generalization error and therefore mitigate overfitting. Two different overfitting-prevention approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets are presented, showing a clear advantage of the proposed approaches when the training set is small.

Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods are used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, in which the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
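
To illustrate the per-datum deviation statistic on which the anomaly detection rests (the multiscale tree and the streaming subspace updates themselves are not reproduced), here is a minimal sketch with an invented, fixed subspace:

```python
import numpy as np

rng = np.random.default_rng(5)

def residual_stat(U, x):
    """Deviation of a datum x from a learned subspace with orthonormal basis U:
    the norm of the component of x orthogonal to the subspace."""
    return np.linalg.norm(x - U @ (U.T @ x))

# Hypothetical 2-D subspace in R^10, standing in for the tracked local model.
U = np.linalg.qr(rng.normal(size=(10, 2)))[0]

normal_point  = U @ rng.normal(size=2) + 0.05 * rng.normal(size=10)
anomaly_point = rng.normal(size=10)                      # far from the subspace

print("normal :", residual_stat(U, normal_point))        # small residual
print("anomaly:", residual_stat(U, anomaly_point))       # large residual flags a change
```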

Relevância: 60.00%

Publicador:

Resumo:

Secure Access For Everyone (SAFE) is an integrated system for managing trust using a logic-based declarative language. Logical trust systems authorize each request by constructing a proof from a context: a set of authenticated logic statements representing credentials and policies issued by various principals in a networked system. A key barrier to practical use of logical trust systems is the problem of managing proof contexts: identifying, validating, and assembling the credentials and policies that are relevant to each trust decision.

SAFE addresses this challenge by (i) proposing a distributed authenticated data repository for storing the credentials and policies, and (ii) introducing a programmable credential discovery and assembly layer that generates the appropriate tailored context for a given request. The authenticated data repository is built upon a scalable key-value store with its contents named by secure identifiers and certified by the issuing principal. The SAFE language provides scripting primitives to generate and organize logic sets representing credentials and policies, materialize the logic sets as certificates, and link them to reflect delegation patterns in the application. The authorizer fetches the logic sets on demand, then validates and caches them locally for further use. Upon each request, the authorizer constructs the tailored proof context and provides it to the SAFE inference for certified validation.

Delegation-driven credential linking with certified data distribution provides flexible and dynamic policy control, enabling security and trust infrastructure to be agile while addressing the perennial problems of today's certificate infrastructure: automated credential discovery, scalable revocation, and issuing credentials without relying on a centralized authority.

We envision SAFE as a new foundation for building secure network systems. We used SAFE to build secure services based on case studies drawn from practice: (i) a secure name service resolver, similar to DNS, that resolves a name across multi-domain federated systems; (ii) a secure proxy shim to delegate access control decisions in a key-value store; (iii) an authorization module for a networked infrastructure-as-a-service system with a federated trust structure (NSF GENI initiative); and (iv) a secure cooperative data analytics service that adheres to individual secrecy constraints while disclosing the data. We present an empirical evaluation based on these case studies and demonstrate that SAFE supports a wide range of applications with low overhead.
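
The sketch below is a hypothetical illustration of one idea mentioned above: a key-value store whose entries are named by secure identifiers and certified by the issuing principal. It is not SAFE's actual implementation or language; the key is derived from the issuer's public key plus a label, and the stored "logic set" is just an invented placeholder string carrying the issuer's signature so that any authorizer can validate it on fetch.

```python
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

store = {}  # stand-in for the scalable key-value store

def publish(issuer: ed25519.Ed25519PrivateKey, label: str, logic_set: str) -> str:
    """Store a signed logic set under a self-certifying identifier."""
    pub = issuer.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    key = hashlib.sha256(pub + label.encode()).hexdigest()          # secure identifier
    store[key] = (pub, label, logic_set.encode(), issuer.sign(logic_set.encode()))
    return key

def fetch_and_validate(key: str) -> str:
    """Fetch a logic set, check its name binding, and verify the issuer's signature."""
    pub, label, payload, sig = store[key]
    assert key == hashlib.sha256(pub + label.encode()).hexdigest()   # name matches issuer+label
    ed25519.Ed25519PublicKey.from_public_bytes(pub).verify(sig, payload)  # raises if forged
    return payload.decode()

issuer = ed25519.Ed25519PrivateKey.generate()
ref = publish(issuer, "alice/policies", 'grant(alice, read, "object1").')  # made-up statement
print(fetch_and_validate(ref))
```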

Relevância: 60.00%

Publicador:

Resumo:

An array of Bio-Argo floats equipped with radiometric sensors has recently been deployed in various open-ocean areas representative of the diversity of trophic and bio-optical conditions prevailing in the so-called Case 1 waters. Around solar noon and almost every day, each float acquires 0-250 m vertical profiles of Photosynthetically Available Radiation and downward irradiance at three wavelengths (380, 412 and 490 nm). To date, more than 6500 profiles have been acquired for each radiometric channel. Because these radiometric data are collected without operator control and regardless of meteorological conditions, specific and automatic data processing protocols have to be developed. Here, we present a data quality-control procedure aimed at verifying profile shapes and providing near-real-time data distribution. This procedure is specifically designed to (1) identify the main measurement issues (i.e. dark signal, atmospheric clouds, spikes and wave-focusing occurrences) and (2) validate the final data with a hierarchy of tests to ensure their scientific usability. The procedure, adapted to each of the four radiometric channels, flags each profile in a way compliant with the data management procedure used by the Argo program. The main perturbations in the light field are identified by the new protocols with good performance over the whole dataset, which highlights the procedure's potential applicability at the global scale. Finally, the comparison with modeled surface irradiances allows the accuracy of quality-controlled irradiance measurements to be assessed and any possible evolution over the float lifetime due to biofouling and instrumental drift to be identified.
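
As a toy example of one of the QC steps listed above, the sketch below flags spikes in a synthetic downward-irradiance profile by comparing log-irradiance with a running median; the actual Bio-Argo protocols, thresholds and flag conventions are not reproduced here, and all profile values are invented.

```python
import numpy as np

def flag_spikes(irradiance, window=5, n_sigma=4.0):
    """Flag spikes in a downward-irradiance profile. Work on log-irradiance (exponential
    decay becomes roughly linear), compare each point with a running median, and flag
    deviations beyond n_sigma robust standard deviations. Profile ends are left unflagged
    because a full window is not available there."""
    logz = np.log(np.asarray(irradiance, dtype=float))
    flags = np.zeros(logz.size, dtype=bool)
    med = np.array([np.median(logz[i - window: i + window + 1])
                    for i in range(window, logz.size - window)])
    resid = logz[window:-window] - med
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # robust std via MAD
    flags[window:-window] = np.abs(resid) > n_sigma * sigma
    return flags

rng = np.random.default_rng(6)
depth = np.arange(0.0, 250.0, 1.0)
ed490 = 1.5 * np.exp(-0.04 * depth) * np.exp(0.02 * rng.normal(size=depth.size))  # noisy profile
ed490[[40, 120]] *= 3.0                         # two artificial spikes (e.g. wave focusing)
print("flagged depths (m):", depth[flag_spikes(ed490)])
```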
