937 results for Distributed data
Abstract:
The evolution of wireless sensor network technology has enabled the development of advanced systems for real-time monitoring. Wireless sensor networks are increasingly being used for precision agriculture. Their advantages in agriculture include distributed data collection and monitoring and the monitoring and control of climate, irrigation and nutrient supply, thereby decreasing the cost of production and increasing its efficiency. This paper describes the security issues related to wireless sensor networks and suggests some techniques for achieving system security. It also discusses a protocol that can be adopted to increase the security of the transmitted data.
Abstract:
Nowadays, the Oceanographic and Geospatial communities are closely related worlds, but they follow parallel paths in data storage, distribution, modelling and data analysis. This situation produces different data model implementations for the same features. While geospatial information systems use two or three dimensions, oceanographic models use multidimensional parameters such as temperature, salinity, currents, ocean colour… These significant differences between the data models of the two communities lead to difficulties in dataset analysis for both sciences. These problems directly affect the Mediterranean Institute for Advanced Studies (IMEDEA (CSIC-UIB)). Researchers at this institute perform intensive processing of data from oceanographic facilities like CTDs, moorings, gliders… together with geospatial data collected for the integrated management of coastal zones. In this paper, we present a solution based on THREDDS (Thematic Real-time Environmental Distributed Data Services). THREDDS allows data access through the standard geospatial data protocol Web Coverage Service (WCS), within the European project ECOOP (European Coastal Sea Operational Observing and Forecasting system). The goal of ECOOP is to consolidate, integrate and further develop existing European coastal and regional seas operational observing and forecasting systems into an integrated pan-European system targeted at detecting environmental and climate changes.
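This style of access can be exercised with any Web Coverage Service client. A minimal Python sketch using OWSLib follows; the endpoint URL, coverage name and bounding box are hypothetical placeholders, not the actual IMEDEA/ECOOP services.

    # Minimal WCS client sketch (OWSLib); endpoint and coverage are placeholders.
    from owslib.wcs import WebCoverageService

    wcs = WebCoverageService("http://example.org/thredds/wcs/ocean", version="1.0.0")
    print(list(wcs.contents))                          # coverages served by THREDDS

    cov = wcs.getCoverage(identifier="sea_water_temperature",
                          bbox=(1.0, 38.0, 4.5, 40.5),  # illustrative Mediterranean box
                          crs="EPSG:4326",
                          format="GeoTIFF")
    with open("sst.tif", "wb") as f:
        f.write(cov.read())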
Abstract:
Two ongoing projects at ESSC that involve the development of new techniques for extracting information from airborne LiDAR data and combining this information with environmental models are discussed. The first project, in conjunction with Bristol University, aims to improve 2-D river flood flow models by using remote sensing to provide distributed data for model calibration and validation. Airborne LiDAR can provide such models with a dense and accurate floodplain topography, together with vegetation heights for the parameterisation of model friction. The vegetation height data can be used to specify a friction factor at each node of a model's finite element mesh. A LiDAR range image segmenter has been developed which converts a LiDAR image into separate raster maps of surface topography and vegetation height for use in the model. Satellite and airborne SAR data have been used to measure flood extent remotely in order to validate the modelled flood extent. Methods have also been developed for improving the models by decomposing the model's finite element mesh to reflect floodplain features such as hedges and trees, which have frictional properties different from their surroundings. Originally developed for rural floodplains, the segmenter is currently being extended to provide DEMs and friction parameter maps for urban floods by fusing the LiDAR data with digital map data. The second project is concerned with the extraction of tidal channel networks from LiDAR. These networks are important features of the inter-tidal zone and play a key role in tidal propagation and in the evolution of salt-marshes and tidal flats. The study of their morphology is currently an active area of research, and a number of network-related theories have been developed that require validation using dense and extensive observations of network forms and cross-sections. The conventional method of measuring networks is cumbersome and subjective, involving manual digitisation of aerial photographs in conjunction with field measurement of channel depths and widths for selected parts of the network. A semi-automatic technique has been developed to extract networks from LiDAR data of the inter-tidal zone. A multi-level, knowledge-based approach has been implemented, whereby low-level algorithms first extract channel fragments based mainly on image properties, and a high-level processing stage then improves the network using domain knowledge. The low-level approach uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges to form channels. The higher-level processing includes a channel repair mechanism.
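As a rough illustration of the low-level stage, multi-scale edge detection over a LiDAR surface can be sketched in Python with scikit-image; the file name is a placeholder, and the pairing of anti-parallel edges into channels and the knowledge-based repair are not shown.

    # Minimal multi-scale edge detection sketch; the input file is hypothetical.
    import numpy as np
    from skimage import io, feature

    dem = io.imread("lidar_intertidal_dem.tif").astype(float)
    edges = np.zeros_like(dem, dtype=bool)
    for sigma in (1.0, 2.0, 4.0):              # detect channel edges at several scales
        edges |= feature.canny(dem, sigma=sigma)
    # Higher-level steps (associating adjacent anti-parallel edges into
    # channels, then repairing the network with domain knowledge) follow.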
Abstract:
In clinical trials, situations often arise where more than one response from each patient is of interest, and it is required that any decision to stop the study be based upon some or all of these measures simultaneously. Theory for the design of sequential experiments with simultaneous bivariate responses is described by Jennison and Turnbull (Jennison, C., Turnbull, B. W. (1993). Group sequential tests for bivariate response: interim analyses of clinical trials with both efficacy and safety endpoints. Biometrics 49:741-752) and Cook and Farewell (Cook, R. J., Farewell, V. T. (1994). Guidelines for monitoring efficacy and toxicity responses in clinical trials. Biometrics 50:1146-1152) in the context of one efficacy and one safety response. These expositions are in terms of normally distributed data with known covariance. The methods proposed require specification of the correlation, ρ, between the test statistics monitored as part of the sequential test. It can be difficult to quantify ρ, and previous authors have suggested simply taking the lowest plausible value, as this will guarantee power. This paper begins with an illustration of the effect that inappropriate specification of ρ can have on the preservation of trial error rates. It is shown that both the type I error rate and the power can be adversely affected. As a possible solution to this problem, formulas are provided for the calculation of the correlation from data collected as part of the trial. An adaptive approach that makes use of these formulas is proposed and evaluated, and an example is provided to illustrate the method. Attention is restricted to the bivariate case for ease of computation, although the formulas derived are applicable in the general multivariate case.
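The paper's own formulas are not reproduced here, but for orientation, the natural interim estimate of ρ from n patients' paired efficacy responses x_i and safety responses y_i is the sample correlation

    \hat{\rho} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}
                      {\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^{2}}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^{2}}},

which an adaptive procedure can substitute for a fixed, worst-case value of ρ when recomputing the stopping boundaries at each interim analysis.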
Abstract:
This article reviews current technological developments, particularly Peer-to-Peer technologies and Distributed Data Systems, and their value to community memory projects, particularly those concerned with the preservation of the cultural, literary and administrative data of cultures which have suffered genocide or are at risk of genocide. It draws attention to the comparatively strong online presence of genocide denial groups and to changes in the technological strategies of Holocaust denial and other far-right groups. It draws on the author's work providing IT support for a UK-based non-governmental organization that supports survivors of the genocide in Rwanda.
Abstract:
The Distributed Rule Induction (DRI) project at the University of Portsmouth is concerned with distributed data mining algorithms for automatically generating rules of all kinds. In this paper we present a system architecture and its implementation for inducing modular classification rules in parallel in a local area network using a distributed blackboard system. We present initial results of a prototype implementation based on the Prism algorithm.
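For orientation, a minimal single-site sketch of Prism-style rule induction (after Cendrowska's original algorithm) is given below; the data layout and helper names are illustrative, and the distributed blackboard coordination is not shown.

    # Sketch of Prism-style induction of one modular classification rule.
    def induce_rule(rows, target_cls, attrs):
        """Greedily add attribute=value terms that maximise P(target | rule)."""
        rule, covered = {}, list(rows)
        while any(r["class"] != target_cls for r in covered):
            free = [a for a in attrs if a not in rule]
            if not free:                       # nothing left to specialise on
                break
            a, v = max(((a, v) for a in free for v in {r[a] for r in covered}),
                       key=lambda av: precision(covered, target_cls, *av))
            rule[a] = v
            covered = [r for r in covered if r[a] == v]
        return rule, covered

    def precision(rows, target_cls, attr, val):
        """Fraction of rows matching attr=val that belong to the target class."""
        hit = [r for r in rows if r[attr] == val]
        return sum(r["class"] == target_cls for r in hit) / len(hit) if hit else 0.0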
Effects of probiotic bacteria on Candida presence and anti-Candida IgA in the oral cavity of the elderly
Abstract:
Imbalance in the resident microbiota may promote the growth of opportunistic microorganisms, such as yeasts of the genus Candida, and the development of diseases, especially in aged people. This study evaluated whether the consumption of the probiotic Yakult LB® (Lactobacillus casei and Bifidobacterium breve) was able to influence the specific immunological response against Candida and the presence of these yeasts in the oral cavity of 42 healthy aged individuals. Saliva samples were collected before and after use of the probiotic for 30 days, 3 times a week. The samples were plated on Sabouraud dextrose agar with chloramphenicol, the colony-forming units (CFU/mL) were counted and the Candida species were identified. Anti-Candida IgA analysis was conducted using the ELISA technique. ANOVA and Student's t-test were used for normally distributed data and the Wilcoxon test for data with a non-normal distribution (α=0.05). The results showed a statistically significant reduction (p<0.05) in Candida prevalence (from 92.9% to 85.7%), in Candida CFU/mL counts and in the number of non-albicans species after consumption of the probiotic. Immunological analysis demonstrated a significant increase (p<0.05) in anti-Candida IgA levels. In conclusion, the probiotic bacteria reduced Candida numbers in the oral cavity of the elderly and increased the specific secretory immune response against these yeasts, suggesting their possible use in controlling oral candidosis.
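A minimal Python/scipy sketch of this test-selection logic, on synthetic counts (the variable names and numbers are illustrative, not the study's data):

    # Choose a paired test according to a normality check, as in the abstract.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    cfu_before = rng.lognormal(3.0, 1.0, size=42)      # CFU/mL before probiotic use
    cfu_after = cfu_before * rng.uniform(0.4, 1.1, size=42)

    _, p_norm = stats.shapiro(cfu_after - cfu_before)
    if p_norm > 0.05:                                  # differences look normal
        stat, p = stats.ttest_rel(cfu_before, cfu_after)
    else:                                              # non-normal: Wilcoxon signed-rank
        stat, p = stats.wilcoxon(cfu_before, cfu_after)
    print(f"p = {p:.4f}")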
Abstract:
Introduction: improvements in health care have increased Brazilians' life expectancy. As a result, more people reach old age and undergo common aging processes such as decreased balance. Consequently, the risk of falls increases, with a negative impact on quality of life, and as more people age, the institutionalization rate increases. Objectives: to compare the balance and quality of life of institutionalized and non-institutionalized elderly people, and to correlate the Berg Balance Scale (BBS) with the Timed Up and Go test (TUG) and with the Medical Outcomes Study 36-Item Short-Form Health Survey (SF-36). Methods: twenty elderly people were evaluated, ten institutionalized (GI) and ten non-institutionalized (GNI). Balance was assessed with the BBS and the TUG; quality of life was evaluated with the SF-36. The significance level was set at 5% (p<0.05). GraphPad Prism 5 was used to analyze the data. The Shapiro-Wilk test was applied to identify the distribution of the data. For comparisons between groups, normally distributed data were analyzed with the unpaired Student's t-test and non-normally distributed data with the non-parametric Mann-Whitney test. Correlations were analyzed with the Pearson (normal data) and Spearman (non-normal data) tests. Results: the mean age was 72.8±8.36 years (GI) and 67.4±3.53 years (GNI). The GNI performed better than the GI in both the BBS (p=0.0017) and the TUG (p<0.0002). There was no difference in quality of life between groups. There was a correlation between the BBS and the TUG (-0.8907 for the GI and -0.7180 for the GNI) and between the BBS and the functional capacity domain of the SF-36 (0.7657). Conclusion: the non-institutionalized elderly showed better balance. A good correlation was found between the TUG and the BBS. In the studied sample, institutionalization did not influence quality of life.
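The normality-driven choice between Pearson and Spearman correlations can be sketched in Python/scipy as follows (synthetic scores, illustrative names):

    # Pick the correlation test according to Shapiro-Wilk normality checks.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    bbs = rng.normal(45, 6, size=10)                   # Berg Balance Scale scores
    tug = 60 - bbs + rng.normal(0, 2, size=10)         # Timed Up and Go times (s)

    normal = all(stats.shapiro(x)[1] > 0.05 for x in (bbs, tug))
    r, p = stats.pearsonr(bbs, tug) if normal else stats.spearmanr(bbs, tug)
    print(f"{'Pearson' if normal else 'Spearman'} r = {r:.4f}, p = {p:.4f}")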
Abstract:
There are several variants of the widely used Fuzzy C-Means (FCM) algorithm that support clustering of data distributed across different sites. Those methods have been studied under different names, such as collaborative and parallel fuzzy clustering. In this study, we offer some augmentations of two FCM-based clustering algorithms used to cluster distributed data, arriving at constructive ways of determining essential parameters of the algorithms (including the number of clusters) and forming a set of systematically structured guidelines, such as the selection of a specific algorithm depending on the nature of the data environment and the assumptions made about the number of clusters. A thorough complexity analysis, covering space, time, and communication aspects, is reported. A series of detailed numeric experiments is used to illustrate the main ideas discussed in the study.
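For reference, the standard single-site FCM updates that such distributed variants build on can be sketched as follows; the collaborative/parallel exchange of prototypes between sites is not shown.

    # Plain Fuzzy C-Means sketch: alternate prototype and membership updates.
    import numpy as np

    def fcm(X, c, m=2.0, iters=100, seed=0):
        """Returns (prototypes V, membership matrix U) for fuzzifier m > 1."""
        rng = np.random.default_rng(seed)
        U = rng.dirichlet(np.ones(c), size=len(X))     # n x c, rows sum to 1
        for _ in range(iters):
            Um = U ** m
            V = (Um.T @ X) / Um.sum(axis=0)[:, None]   # membership-weighted prototypes
            d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
            w = d ** (-2.0 / (m - 1.0))                # inverse-distance weights
            U = w / w.sum(axis=1, keepdims=True)       # normalised memberships
        return V, U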
Abstract:
Semantic Web technologies are strategic for fulfilling the openness requirement of Self-Aware Pervasive Service Ecosystems. In fact, they provide agents with the ability to cope with distributed data, using RDF to represent information, ontologies to describe relations between concepts from any domain (e.g. equivalence, specialization/extension, and so on) and reasoners to extract implicit knowledge. The aim of this thesis is to study these technologies, design an extension of a pervasive service ecosystems middleware capable of exploiting this semantic power, and investigate the performance implications.
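A minimal Python/rdflib sketch of these ingredients, with illustrative IRIs rather than the middleware's actual vocabulary:

    # RDF triples, one ontology axiom, and a SPARQL query over the asserted data.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/eco#")
    g = Graph()
    g.add((EX.TemperatureSensor, RDFS.subClassOf, EX.Sensor))   # ontology axiom
    g.add((EX.s1, RDF.type, EX.TemperatureSensor))              # instance data
    g.add((EX.s1, EX.reading, Literal(21.5)))

    # A reasoner would also infer that s1 is an EX.Sensor; plain SPARQL
    # only sees the asserted triples.
    q = "SELECT ?s WHERE { ?s a <http://example.org/eco#TemperatureSensor> }"
    for row in g.query(q):
        print(row.s)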
Abstract:
The present study concerns the acoustical characterisation of Italian historical theatres. It starts from ISO 3382, which provides guidelines for the measurement of a well-established set of room acoustic parameters inside performance spaces. Nevertheless, the peculiarity of Italian historical theatres calls for a more specific approach. The Charter of Ferrara goes in this direction, aiming at qualifying the sound field in this kind of hall, and the present work pursues that direction. To understand how the acoustical qualification should be done, the Bonci Theatre in Cesena was taken as a case study. In September 2012 acoustical measurements were carried out in the theatre, recording monaural and binaural impulse responses at each seat in the hall. The values of the time, energy, psycho-acoustical and spatial criteria were extracted according to ISO 3382. Statistics were computed and a 3D model of the theatre was built and tuned. Statistical investigations were carried out on the whole set of measurement positions and on carefully chosen reduced subsets; it turned out that these subsets are representative only of the “average” acoustics of the hall. Normality tests were carried out to verify whether EDT, T30 and C80 could be described, with some degree of reliability, by a theoretical distribution; different results were found, according to the varying assumptions underlying each test. An attempt was then made to relate the numerical results of the statistical analysis to the perceptual sphere: looking for “acoustically equivalent areas”, relative difference limens were taken as threshold values, but no rule of thumb emerged. Finally, the significance of the usual representation through mean values and standard deviations, which may be meaningful for normally distributed data, was investigated.
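For context, EDT and T30 are conventionally obtained from the measured impulse responses through Schroeder backward integration followed by a linear fit of the decay curve; a minimal Python sketch of this standard step (not the specific toolchain used in the study):

    # Schroeder decay curve and a T30-style reverberation-time estimate.
    import numpy as np

    def schroeder_db(ir):
        """Backward-integrated energy decay curve in dB, from an impulse response."""
        edc = np.cumsum(ir[::-1] ** 2)[::-1]
        return 10 * np.log10(edc / edc[0])

    def reverberation_time(decay_db, fs, lo=-5.0, hi=-35.0):
        """Fit the decay between lo and hi dB and extrapolate to -60 dB."""
        t = np.arange(len(decay_db)) / fs
        sel = (decay_db <= lo) & (decay_db >= hi)
        slope, _ = np.polyfit(t[sel], decay_db[sel], 1)
        return -60.0 / slope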
Abstract:
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is provided by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
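A minimal sketch of the integral-conserving idea, with SciPy's shape-preserving PCHIP standing in for the paper's parametrized Hermitian curve (PCHIP has no user-tunable overshoot parameter, but likewise avoids overshoot on monotone data):

    # Re-bin histogrammed data by interpolating its cumulative integral.
    import numpy as np
    from scipy.interpolate import PchipInterpolator

    def rebin(edges_src, values, edges_dst):
        """Conserves the integral; avoids negative values for positive input."""
        cum = np.concatenate(([0.0], np.cumsum(values * np.diff(edges_src))))
        F = PchipInterpolator(edges_src, cum)      # monotone Hermitian interpolant
        return np.diff(F(edges_dst)) / np.diff(edges_dst)

    src = np.linspace(0.0, 1.0, 11)                # 10 source bins
    vals = np.random.default_rng(0).random(10)     # positive-definite samples
    dst = np.linspace(0.0, 1.0, 26)                # 25 finer target bins
    new = rebin(src, vals, dst)
    assert np.isclose((new * np.diff(dst)).sum(), (vals * np.diff(src)).sum())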
Abstract:
The use of group-randomized trials is particularly widespread in the evaluation of health care, educational, and screening strategies. Group-randomized trials represent a subset of a larger class of designs often labeled nested, hierarchical, or multilevel and are characterized by the randomization of intact social units or groups, rather than individuals. The application of random effects models to group-randomized trials requires the specification of fixed and random components of the model. The underlying assumption is usually that these random components are normally distributed. This research is intended to determine if the Type I error rate and power are affected when the assumption of normality for the random component representing the group effect is violated. In this study, simulated data are used to examine the Type I error rate, power, bias and mean squared error of the estimates of the fixed effect and the observed intraclass correlation coefficient (ICC) when the random component representing the group effect possesses distributions with non-normal characteristics, such as heavy tails or severe skewness. The simulated data are generated with various characteristics (e.g. number of schools per condition, number of students per school, and several within-school ICCs) observed in most small, school-based, group-randomized trials. The analysis is carried out using SAS PROC MIXED, Version 6.12, with random effects specified in a RANDOM statement and restricted maximum likelihood (REML) estimation. The results from the non-normally distributed data are compared to the results obtained from the analysis of data with similar design characteristics but normally distributed random effects. The results suggest that violation of the normality assumption for the group component by a skewed or heavy-tailed distribution does not appear to influence the estimation of the fixed effect, the Type I error rate, or power. Negative biases were detected when estimating the sample ICC and increased dramatically in magnitude as the true ICC increased. These biases were not as pronounced when the true ICC was within the range observed in most group-randomized trials (i.e. 0.00 to 0.05). The normally distributed group effect also resulted in biased ICC estimates when the true ICC was greater than 0.05; however, this may be a result of higher correlation within the data.
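A minimal Python sketch of the simulation idea, with statsmodels' MixedLM standing in for the study's SAS PROC MIXED analysis; all numbers are illustrative.

    # Group-randomized data with a skewed (non-normal) random group effect,
    # fit by a linear mixed model with REML.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    schools, students = 20, 30
    treat = np.repeat([0, 1], schools // 2)            # condition per school
    u = rng.exponential(1.0, schools) - 1.0            # skewed group effect, mean 0
    data = pd.DataFrame({
        "school": np.repeat(np.arange(schools), students),
        "treat": np.repeat(treat, students),
    })
    data["y"] = 0.3 * data["treat"] + u[data["school"]] + rng.normal(0, 2, len(data))

    fit = smf.mixedlm("y ~ treat", data, groups=data["school"]).fit(reml=True)
    icc = fit.cov_re.iloc[0, 0] / (fit.cov_re.iloc[0, 0] + fit.scale)
    print(fit.summary(), f"estimated ICC = {icc:.3f}")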