20 results for Data combination

in Deakin Research Online - Australia


Relevance: 40.00%

Abstract:

Animals that undertake migrations from foraging grounds to suitable breeding areas must adopt strategies in these new conditions in order to minimise the rate at which body condition deteriorates (which will occur due to oogenesis or provisioning for young). For some animals this involves continued foraging, whereas for others the optimal strategy is to fast during the breeding season. The leatherback turtle undertakes long-distance migrations from temperate zones to tropical breeding areas, and in some of these areas it has been shown to exhibit diving behaviour indicative of foraging. We used conventional time–depth recorders and a single novel mouth-opening sensor to investigate the foraging behaviour of leatherback turtles in the southern Caribbean. Diving behaviour suggested attempted foraging on vertically migrating prey, with significantly more diving, to a more consistent depth, occurring during the night. No obvious prey manipulation was detected by the mouth sensor, but rhythmic mouth opening did occur during specific phases of dives, suggesting that the turtle was relying on gustatory cues to sense its immediate environment. Patterns of diving in conjunction with these mouth-opening activities imply that leatherbacks attempt to forage during the breeding season and that gustatory cues are important to them.

Relevance: 30.00%

Abstract:

Mobile computing has enabled users to seamlessly access databases even when they are on the move. Mobile computing environments require data management approaches that can provide complete and highly available access to shared data at any time, from anywhere. In this paper, we propose a novel replicated data protocol for achieving this goal. The proposed scheme replicates data synchronously over stationary sites based on a three-dimensional grid structure, while objects in mobile sites are asynchronously replicated based on the commonly visited sites of each user. This combination allows the proposed protocol to operate with less than full connectivity, to adapt easily to changes in group membership, and to avoid requiring all sites to agree to update data objects at any given time, giving the technique flexibility in mobile environments. The proposed replication technique is compared with a baseline replication technique and shown to exhibit high availability, fault tolerance and minimal access times for data and services, which are very important in an environment with low-quality communication links.
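
The abstract does not spell out the quorum construction, so the following is a minimal sketch of one plausible scheme on a three-dimensional logical grid (an assumption for illustration, not necessarily this protocol's exact rules): synchronous writes cover one full plane of stationary sites, and reads contact a column that crosses every such plane, so any read quorum intersects any write quorum.

```python
from itertools import product

def build_grid(nx, ny, nz):
    """Stationary replica sites arranged on a logical 3-D grid, addressed (x, y, z)."""
    return list(product(range(nx), range(ny), range(nz)))

def write_quorum(grid, z_plane):
    """Hypothetical rule: synchronous writes go to every site in one z-plane."""
    return {s for s in grid if s[2] == z_plane}

def read_quorum(grid, x, y):
    """Hypothetical rule: reads contact one column, which crosses every z-plane."""
    return {s for s in grid if s[:2] == (x, y)}

grid = build_grid(3, 3, 3)
# any column intersects any plane, so a read always sees the latest synchronous write
assert all(write_quorum(grid, z) & read_quorum(grid, 0, 2) for z in range(3))
```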

Relevance: 30.00%

Abstract:

Accurate assessment of the fate of salts, nutrients, and pollutants in natural, heterogeneous soils requires a proper quantification of both spatial and temporal solute spreading during solute movement. The number of experiments with multisampler devices that measure solute leaching as a function of space and time is increasing. The breakthrough curve (BTC) characterizes the temporal aspect of solute leaching, and recently the spatial solute distribution curve (SSDC) was introduced to describe the spatial solute distribution. We combined and extended both concepts to develop a tool for the comprehensive analysis of the full spatio-temporal behavior of solute leaching. The sampling locations are ranked in order of descending amount of total leaching (defined as the cumulative leaching from an individual compartment at the end of the experiment), thus collapsing both spatial axes of the sampling plane into one. The leaching process can then be described by a curved surface that is a function of the single spatial coordinate and time. This leaching surface is scaled to integrate to unity and is termed S; it can efficiently represent data from multisampler solute transport experiments or simulation results from multidimensional solute transport models. The mathematical relationships between the scaled leaching surface S, the BTC, and the SSDC are established, and any desired characteristic of the leaching process can be derived from S. The analysis was applied to a chloride leaching experiment on a lysimeter with 300 drainage compartments of 25 cm² each. The sandy soil monolith in the lysimeter exhibited fingered flow in the water-repellent top layer. The observed S demonstrated the absence of a sharp separation between fingers and dry areas, owing to diverging flow in the wettable soil below the fingers. Times-to-peak, maximum solute fluxes, and total leaching varied more in high-leaching than in low-leaching compartments. This suggests a stochastic–convective transport process in the high-flow streamtubes, while convection–dispersion is predominant in the low-flow areas. S can be viewed as a bivariate probability density function. Its marginal distributions are the BTC of all sampling locations combined, and the SSDC of cumulative solute leaching at the end of the experiment. The observed S cannot be represented by assuming complete independence between its marginal distributions, indicating that S contains information about the leaching process that cannot be derived from the combination of the BTC and the SSDC.
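
As a concrete illustration of the ranking and scaling steps described above, here is a minimal numerical sketch. The time step, compartment area and gamma-distributed synthetic fluxes are assumptions for the example, not the experiment's data:

```python
import numpy as np

def leaching_surface(flux, dt=1.0, da=1.0):
    """Scaled leaching surface S from solute flux measured per sampling
    compartment (rows) over time (columns)."""
    total = flux.sum(axis=1) * dt          # cumulative leaching per compartment
    order = np.argsort(total)[::-1]        # rank compartments by descending total
    s = flux[order]
    s = s / (s.sum() * dt * da)            # scale so S integrates to unity
    btc = s.sum(axis=0) * da               # marginal over space: the BTC
    ssdc = s.sum(axis=1) * dt              # marginal over time: the SSDC
    return s, btc, ssdc

# toy data: 300 compartments, 50 sampling times
rng = np.random.default_rng(0)
flux = rng.gamma(2.0, 1.0, size=(300, 50))
S, btc, ssdc = leaching_surface(flux)
assert np.isclose(S.sum(), 1.0)
```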

Relevance: 30.00%

Abstract:

Background:  Whether calcium supplementation can reduce osteoporotic fractures is uncertain. We did a meta-analysis including all randomised trials in which calcium, or calcium in combination with vitamin D, was used to prevent fracture and osteoporotic bone loss.

Methods:  We identified 29 randomised trials (n=63 897) using electronic databases, supplemented by a hand-search of reference lists, review articles, and conference abstracts. All randomised trials that recruited people aged 50 years or older were eligible. The main outcomes were fractures of all types and percentage change of bone-mineral density from baseline. Data were pooled by use of a random-effects model.
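
The abstract names only a random-effects model; the DerSimonian–Laird estimator sketched below is the most common choice and is assumed here purely for illustration. The trial estimates in the example are made up, not the paper's data:

```python
import numpy as np

def pool_random_effects(log_rr, se):
    """DerSimonian-Laird random-effects pooling of per-trial log risk ratios."""
    log_rr, se = np.asarray(log_rr, float), np.asarray(se, float)
    w = 1.0 / se**2                               # inverse-variance (fixed) weights
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)         # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)  # between-trial variance estimate
    w_re = 1.0 / (se**2 + tau2)                   # random-effects weights
    pooled = np.sum(w_re * log_rr) / np.sum(w_re)
    se_p = np.sqrt(1.0 / np.sum(w_re))
    return np.exp(pooled), np.exp([pooled - 1.96 * se_p, pooled + 1.96 * se_p])

# illustrative trial estimates only -- NOT the 29 trials analysed in this paper
rr, ci = pool_random_effects(np.log([0.85, 0.92, 0.80]), se=[0.08, 0.05, 0.10])
```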

Findings:  In trials that reported fracture as an outcome (17 trials, n=52 625), treatment was associated with a 12% risk reduction in fractures of all types (risk ratio 0·88, 95% CI 0·83–0·95; p=0·0004). In trials that reported bone-mineral density as an outcome (23 trials, n=41 419), treatment was associated with a reduced rate of bone loss of 0·54% (0·35–0·73; p<0·0001) at the hip and 1·19% (0·76–1·61; p<0·0001) at the spine. The fracture risk reduction was significantly greater (24%) in trials in which the compliance rate was high (p<0·0001). The treatment effect was better with calcium doses of 1200 mg or more than with doses less than 1200 mg (0·80 vs 0·94; p=0·006), and with vitamin D doses of 800 IU or more than with doses less than 800 IU (0·84 vs 0·87; p=0·03).

Interpretation:  Evidence supports the use of calcium, or calcium in combination with vitamin D supplementation, in the preventive treatment of osteoporosis in people aged 50 years or older. For best therapeutic effect, we recommend minimum doses of 1200 mg of calcium, and 800 IU of vitamin D (for combined calcium plus vitamin D supplementation).

Relevance: 30.00%

Abstract:

In this habitat mapping study, multi-beam acoustic data are integrated with extensive, precisely geo-referenced video validation data in a GIS environment to classify benthic substrates and biota at a 33 km² site in the nearshore waters of Victoria, Australia. Using an automated decision-tree classification method, five representative biotic groups were identified in the Cape Nelson survey area using a combination of multi-beam bathymetry, backscatter and derivative products. Rigorous error assessment of the derived, classified maps produced high overall accuracies (>85%) for all mapping products. In addition, a discrete multivariate analysis technique (kappa analysis) was used to assess classification accuracy. The high-resolution (2.5 m cell size) representation of sea-floor morphology and textural characteristics provided by the multi-beam bathymetry and backscatter datasets allowed interpretation of the benthic substrates of the Cape Nelson site and the communities of sessile organisms that populate them. Non-parametric multivariate statistical analysis (ANOSIM) revealed a significant difference in biotic composition between depth strata and between substrate types. Together with other descriptive measures, these results indicate that depth and substrate are important factors in the distributional ecology of the biotic communities at the Cape Nelson study site. BIOENV analysis indicates that derivatives of both multi-beam datasets (bathymetry and backscatter) are correlated with the distribution and density of biotic communities. Results from this study provide new tools for research and management of the coastal zone.
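
For readers unfamiliar with the kappa analysis mentioned above, the sketch below computes overall accuracy and Cohen's kappa from a classification confusion matrix. The matrix itself is hypothetical, not the study's validation data:

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows: reference classes from video validation, cols: mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                 # observed agreement
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n**2   # chance agreement
    return po, (po - pe) / (1 - pe)

cm = [[50, 3, 2],   # hypothetical 3-class example
      [4, 45, 1],
      [1, 2, 48]]
acc, kappa = accuracy_and_kappa(cm)
```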

Relevance: 30.00%

Abstract:

Background: Docetaxel (Taxotere) improves survival and prostate-specific antigen (PSA) response rates in patients with metastatic castrate-resistant prostate cancer (CRPC). We studied the combination of PI-88, an inhibitor of angiogenesis and heparanase activity, and docetaxel in chemotherapy-naive CRPC.

Patients and methods: We conducted a multicentre open-label phase I/II trial of PI-88 in combination with docetaxel. The primary end point was PSA response. Secondary end points included toxicity, radiologic response and overall survival. Doses of PI-88 were escalated to the maximum tolerated dose, whereas docetaxel was given at a fixed dose of 75 mg/m² every three weeks.

Results: Twenty-one patients were enrolled in the dose-escalation component. A further 35 patients were randomly allocated to the study to evaluate the two schedules in the phase II component. The trial was stopped early by the Safety Data Review Board owing to a higher-than-expected rate of febrile neutropenia (27%). In the pooled population, the PSA response rate (50% reduction) was 70%, median survival was 61 weeks (range 6–99 weeks) and 1-year survival was 71%.

Conclusions: The regimen of docetaxel and PI-88 is active in CRPC but associated with significant haematologic toxicity. Further evaluation of different scheduling and dosing of PI-88 and docetaxel may be warranted to optimise efficacy with a more manageable safety profile.

Relevance: 30.00%

Abstract:

Analysis and fusion of social measurements is important to understand what shapes the public's opinion and the sustainability of global development. However, modelling data collected from social responses is challenging, as the data are typically complex and heterogeneous, taking the form of stated facts, subjective assessments, choices, preferences or any combination thereof. Model-wise, these responses are a mixture of data types including binary, categorical, multicategorical, continuous, ordinal, count and rank data. The challenge is therefore to handle mixed data effectively in a unified fusion framework in order to perform inference and analysis. To that end, this paper introduces the eRBM (Embedded Restricted Boltzmann Machine), a probabilistic latent variable model that can represent mixed data using a layer of hidden variables transparent across the different data types. The proposed model comfortably supports large-scale data analysis tasks, including distribution modelling, data completion, prediction and visualisation. We demonstrate these versatile features on several moderate and large-scale publicly available social survey datasets.
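
To make the shared-hidden-layer idea concrete, here is a toy contrastive-divergence RBM with binary and Gaussian visible units feeding one common hidden layer. It is a simplified stand-in for the eRBM (the unit types, layer sizes and learning rate are assumptions), not the paper's model or its inference scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

class MixedRBM:
    """Toy RBM: binary and unit-variance Gaussian visibles share one hidden layer."""
    def __init__(self, n_bin, n_gauss, n_hid):
        self.n_bin = n_bin
        self.W = 0.01 * rng.standard_normal((n_bin + n_gauss, n_hid))
        self.b = np.zeros(n_bin + n_gauss)   # visible biases
        self.c = np.zeros(n_hid)             # hidden biases

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def visible_sample(self, h):
        mean = h @ self.W.T + self.b
        v_bin = (rng.random(self.n_bin) < sigmoid(mean[: self.n_bin])).astype(float)
        v_gauss = mean[self.n_bin:] + rng.standard_normal(mean.size - self.n_bin)
        return np.concatenate([v_bin, v_gauss])

    def cd1_step(self, v0, lr=0.05):
        """One contrastive-divergence (CD-1) update on a single mixed-type record."""
        h0 = self.hidden_probs(v0)
        v1 = self.visible_sample((rng.random(h0.size) < h0).astype(float))
        h1 = self.hidden_probs(v1)
        self.W += lr * (np.outer(v0, h0) - np.outer(v1, h1))
        self.b += lr * (v0 - v1)
        self.c += lr * (h0 - h1)

rbm = MixedRBM(n_bin=3, n_gauss=2, n_hid=8)
record = np.array([1.0, 0.0, 1.0, 0.3, -1.2])   # e.g. binary answers + continuous scores
for _ in range(100):
    rbm.cd1_step(record)
```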

Relevance: 30.00%

Abstract:

Recent investigations have determined that many Android applications in both official and non-official online markets expose details of the user's mobile phone without user consent. In this paper, for the first time in the research literature, we provide a full investigation of why such applications leak, how they leak and where the data is leaked to. To achieve this, we employ a combination of static and dynamic analysis based on examination of Java classes and application behaviour for a data set of 123 samples, all pre-determined as being free from malicious software. Although anti-virus vendor software did not flag any of these samples as malware, approximately 10% of them are shown to leak data about the mobile phone to a third party; applications from the official market appear to be just as susceptible to such leaks as applications from the non-official markets.
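
The paper's analysis pipeline is not reproduced here, but the static half of the idea can be sketched as a crude source/sink scan over decompiled Java classes. The pattern lists are illustrative assumptions (real taint analysis tracks data flow, not mere co-occurrence):

```python
import re
from pathlib import Path

# Hypothetical source/sink patterns for decompiled Android Java -- illustrative only
SOURCES = [r"getDeviceId\(", r"getSubscriberId\(", r"getLine1Number\("]
SINKS = [r"HttpURLConnection", r"HttpPost", r"sendTextMessage\("]

def scan_decompiled(root):
    """Flag classes that reference both a phone-identity source and a
    network/SMS sink -- a coarse heuristic, not full taint tracking."""
    flagged = []
    for java in Path(root).rglob("*.java"):
        text = java.read_text(errors="ignore")
        if (any(re.search(p, text) for p in SOURCES)
                and any(re.search(p, text) for p in SINKS)):
            flagged.append(java)
    return flagged
```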

Relevance: 30.00%

Abstract:

Neural network (NN) models have been widely used in the literature for short-term load forecasting. Their popularity is mainly due to their excellent learning and approximation capability. However, their forecasting performance significantly depends on several factors, including the initial parameters, the training algorithm, and the NN structure. To minimise the negative effects of these factors, this paper proposes a practically simple yet effective and efficient method to combine forecasts generated by NN models. The proposed method includes three main phases: (i) training NNs with different structures, (ii) selecting the best NN models based on their forecasting performance on a validation set, and (iii) combining the forecasts of the selected NNs. Forecast combination is performed by calculating the mean of the forecasts generated by the best NN models. The performance of the proposed method is examined using a real-world data set. Comparative studies demonstrate that the accuracy of the combined forecasts is significantly superior to that of individual NN models.
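
The three phases map directly onto code. Below is a minimal sketch using scikit-learn MLPs; the candidate structures, the mean-absolute-error criterion and the number of retained models are assumptions for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

def combine_nn_forecasts(X_tr, y_tr, X_val, y_val, X_test, n_best=3):
    """(i) train NNs with different structures, (ii) keep the best on a
    validation set, (iii) average their test forecasts."""
    structures = [(8,), (16,), (32,), (16, 8), (32, 16)]
    models = [MLPRegressor(hidden_layer_sizes=s, max_iter=2000,
                           random_state=i).fit(X_tr, y_tr)
              for i, s in enumerate(structures)]
    val_err = [mean_absolute_error(y_val, m.predict(X_val)) for m in models]
    best = sorted(range(len(models)), key=lambda i: val_err[i])[:n_best]
    return np.mean([models[i].predict(X_test) for i in best], axis=0)

# synthetic demand-like data, purely for demonstration
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 6))
y = X @ rng.standard_normal(6) + rng.normal(0, 0.1, 300)
y_hat = combine_nn_forecasts(X[:200], y[:200], X[200:250], y[200:250], X[250:])
```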

Relevance: 30.00%

Abstract:

There is currently no universally recommended and accepted method of data processing within the science of indirect calorimetry, for either mixing-chamber or breath-by-breath systems of expired gas analysis. Exercise physiologists were first surveyed to determine the methods used to process oxygen consumption (V̇O2) data, and current attitudes to data processing within the science of indirect calorimetry. Breath-by-breath datasets obtained from indirect calorimetry during incremental exercise were then used to demonstrate the consequences of commonly used time-based, breath-based and digital-filter post-acquisition data processing strategies. The variability in breath-by-breath data was assessed using multiple regression based on the independent variables ventilation (VE) and the expired gas fractions for oxygen and carbon dioxide (FEO2 and FECO2, respectively). Based on the explained variance of the breath-by-breath V̇O2 data, methods of processing to remove variability were proposed for time-averaged, breath-averaged and digital-filter applications. Among exercise physiologists, the strategy used to remove the variability in sequential V̇O2 measurements varied widely, and consisted of time averages (30 sec [38%], 60 sec [18%], 20 sec [11%], 15 sec [8%]), a moving average of five to 11 breaths (10%), and the middle five of seven breaths (7%). Most respondents indicated that they used multiple criteria to establish maximum V̇O2 (V̇O2max), including the attainment of age-predicted maximum heart rate (HRmax) [53%], a respiratory exchange ratio (RER) >1.10 (49%) or RER >1.15 (27%), and a rating of perceived exertion (RPE) of >17, 18 or 19 (20%). The reasons stated for these strategies included their own beliefs (32%), what they were taught (26%), what they read in research articles (22%), tradition (13%) and the influence of their colleagues (7%). The combination of VE, FEO2 and FECO2 removed 96–98% of the breath-by-breath V̇O2 variability in incremental and steady-state exercise V̇O2 data sets, respectively. Reduction of the residual error in V̇O2 datasets to 10% of the raw variability results from application of a 30-second time average, a 15-breath running average, or a digital filter with a 0.04 Hz low cut-off. Thus, we recommend that once these data processing strategies are used, the peak or maximal value becomes the highest processed datapoint. Exercise physiologists need to agree on, and continually refine through empirical research, a consistent process for analysing data from indirect calorimetry.
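
As an illustration of the digital-filter strategy, the sketch below resamples irregular breath-by-breath V̇O2 onto a uniform time grid and applies a zero-phase low-pass Butterworth filter with the 0.04 Hz cut-off quoted above. The filter order, 1 Hz resampling rate and synthetic data are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def filter_vo2(t_breath, vo2, fs=1.0, cutoff_hz=0.04):
    """Resample breath-by-breath V̇O2 onto a uniform grid, then apply a
    zero-phase low-pass Butterworth filter with the stated cut-off."""
    t_uniform = np.arange(t_breath[0], t_breath[-1], 1.0 / fs)
    vo2_uniform = np.interp(t_uniform, t_breath, vo2)   # breaths arrive irregularly
    b, a = butter(N=2, Wn=cutoff_hz, btype="low", fs=fs)
    return t_uniform, filtfilt(b, a, vo2_uniform)

# synthetic breath-by-breath series, purely for demonstration
rng = np.random.default_rng(0)
t = np.cumsum(rng.uniform(1.5, 4.0, size=300))          # breath timestamps (s)
vo2 = 1.5 + 0.5 * np.sin(t / 60) + rng.normal(0, 0.2, size=300)
t_u, vo2_smooth = filter_vo2(t, vo2)
```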

Relevance: 30.00%

Abstract:

Complex data is challenging to understand when it is represented as written communication, even when it is structured in a table. However, choosing to represent data in creative ways can aid our understanding of complex ideas and patterns. In this regard, the creative industries have a great deal to offer data-intensive scholarly disciplines. Music, for example, is not often used to interpret data, yet the rhythmic nature of music lends itself to the representation and analysis of temporal data. Taking the music industry as a case study, this paper explores how data about historical live music gigs can be analysed, extended and re-presented to create new insights. Using a unique process called ‘songification’ we demonstrate how enhanced auditory data design can provide a medium for aural intuition. The case study also illustrates the benefits of an expanded and inclusive view of research, in which computation and communication, method and media, in combination enable us to explore the larger question of how we can employ technologies to produce, represent, analyse, deliver and exchange knowledge.
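
The paper does not publish its ‘songification’ pipeline; the following is a minimal, hypothetical sonification sketch in the same spirit, mapping monthly gig counts onto a pentatonic scale. The scale, the velocity mapping and the counts are all invented for illustration:

```python
# MIDI note numbers for a C-major pentatonic run
PENTATONIC = [60, 62, 64, 67, 69, 72, 74, 76]

def sonify(counts):
    """Map each count to a (pitch, velocity) pair: busier months play higher and louder."""
    lo, hi = min(counts), max(counts)
    span = (hi - lo) or 1
    notes = []
    for c in counts:
        idx = round((c - lo) / span * (len(PENTATONIC) - 1))
        velocity = 40 + round(80 * (c - lo) / span)
        notes.append((PENTATONIC[idx], velocity))
    return notes

gigs_per_month = [12, 18, 9, 25, 31, 22]   # hypothetical gig counts
print(sonify(gigs_per_month))
```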

Relevance: 30.00%

Abstract:

This paper introduces a method to classify EEG signals using features extracted by an integration of the wavelet transform and the nonparametric Wilcoxon test. Orthogonal Haar wavelet coefficients are ranked based on the Wilcoxon test statistic, and the most prominent discriminant wavelets are assembled to form a feature set that serves as input to a naïve Bayes classifier. Two benchmark datasets, named Ia and Ib, downloaded from the brain–computer interface (BCI) competition II, are employed for the experiments. Classification performance is evaluated using accuracy, mutual information, the Gini coefficient and the F-measure. Widely used classifiers, including the feedforward neural network, support vector machine, k-nearest neighbours, ensemble learning AdaBoost and the adaptive neuro-fuzzy inference system, are also implemented for comparison. The proposed combination of Haar wavelet features and the naïve Bayes classifier considerably outperforms the competing classification approaches and surpasses the best performance on the Ia and Ib datasets reported in the BCI competition II. The use of naïve Bayes also keeps computational cost low, which promotes the implementation of a potential real-time BCI system.
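
The feature pipeline reads directly as code. Below is a minimal sketch using PyWavelets, SciPy and scikit-learn; the decomposition level, the number of kept coefficients and the random ‘trials’ are assumptions, and real use would load the BCI competition II data:

```python
import numpy as np
import pywt
from scipy.stats import ranksums
from sklearn.naive_bayes import GaussianNB

def wilcoxon_wavelet_features(X, y, n_keep=20, wavelet="haar", level=4):
    """Haar-wavelet-transform each trial, rank coefficients by the absolute
    rank-sum (Wilcoxon) statistic between classes, and keep the top ones."""
    coeffs = np.array([np.concatenate(pywt.wavedec(x, wavelet, level=level))
                       for x in X])
    stats = np.array([abs(ranksums(coeffs[y == 0, j], coeffs[y == 1, j]).statistic)
                      for j in range(coeffs.shape[1])])
    keep = np.argsort(stats)[::-1][:n_keep]
    return coeffs[:, keep], keep

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 256))   # stand-in for 40 EEG trials of 256 samples
y = np.repeat([0, 1], 20)
feats, keep = wilcoxon_wavelet_features(X, y)
clf = GaussianNB().fit(feats, y)
```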

Relevance: 30.00%

Abstract:

Multimedia content understanding research requires a rigorous approach to deal with the complexity of the data. At the crux of this problem is the method for dealing with multilevel data whose structure exists at multiple scales and across data sources. A common example is modelling tags jointly with images to improve retrieval, classification and tag recommendation. Associated contextual observations, such as metadata, are rich and can be exploited for content analysis. A major challenge is the need for a principled approach to systematically incorporate associated media with the primary data source of interest. Taking a factor modelling approach, we propose a framework that can discover low-dimensional structures for a primary data source together with other associated information. We cast this task as a subspace learning problem under the framework of Bayesian nonparametrics; thus the subspace dimensionality and the number of clusters are learnt automatically from data instead of being set a priori. Using Beta processes as the building block, we construct random measures in a hierarchical structure to generate multiple data sources and capture their shared statistical structure at the same time. The model parameters are inferred efficiently using a novel combination of Gibbs and slice sampling. We demonstrate the applicability of the proposed model in three applications: image retrieval, automatic tag recommendation and image classification. Experiments using two real-world datasets show that our approach outperforms various state-of-the-art related methods.
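
For intuition about the Beta process prior mentioned above, here is a prior-sampling sketch of the standard finite Beta–Bernoulli approximation. The truncation level K and the hyperparameters are assumptions; the paper's actual inference runs Gibbs and slice sampling over the full hierarchical model:

```python
import numpy as np

def beta_bernoulli_factors(n_items, K=50, a=1.0, b=1.0, seed=0):
    """Finite approximation to a Beta process prior:
    pi_k ~ Beta(a/K, b*(K-1)/K), z_nk ~ Bernoulli(pi_k).
    As K grows, only a sparse subset of factors is ever switched on."""
    rng = np.random.default_rng(seed)
    pi = rng.beta(a / K, b * (K - 1) / K, size=K)   # factor usage probabilities
    Z = rng.random((n_items, K)) < pi               # which factors each item uses
    return pi, Z

pi, Z = beta_bernoulli_factors(n_items=100)
print("factors actually used:", int(Z.any(axis=0).sum()))
```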

Relevance: 30.00%

Abstract:

This paper introduces a novel approach to gene selection based on a substantial modification of the analytic hierarchy process (AHP). The modified AHP systematically integrates the outcomes of individual filter methods to select the most informative genes for microarray classification. Five individual ranking methods, namely the t-test, entropy, the receiver operating characteristic (ROC) curve, the Wilcoxon test and the signal-to-noise ratio, are employed to rank genes, and these rankings are then used as inputs to the modified AHP. Additionally, a method that uses the fuzzy standard additive model (FSAM) for cancer classification based on the genes selected by AHP is also proposed. Traditional FSAM learning is a hybrid process comprising unsupervised structure learning and supervised parameter tuning. A genetic algorithm (GA) is incorporated between the unsupervised and supervised training to optimise the number of fuzzy rules. The integration of the GA enables FSAM to deal with the high-dimension, low-sample-size nature of microarray data and thus enhances the efficiency of the classification. Experiments are carried out on numerous microarray datasets. The results demonstrate the performance dominance of the AHP-based gene selection over the individual ranking methods. Furthermore, the AHP-FSAM combination shows high accuracy in microarray data classification compared with various competing classifiers. The proposed approach is therefore useful for medical practitioners and clinicians as a decision support system that can be implemented in real medical practice.
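
A simplified sketch of the filter-integration idea follows. It scores genes with four of the five named criteria (entropy is omitted for brevity) and aggregates them by mean rank, a plain stand-in for the paper's modified AHP weighting; all data here are synthetic:

```python
import numpy as np
from scipy.stats import ranksums, ttest_ind
from sklearn.metrics import roc_auc_score

def rank_genes(X, y):
    """Score each gene (column) with several filter criteria, then combine
    the per-criterion rankings by mean rank -- not the actual modified AHP."""
    scores = {
        "t":   np.abs(ttest_ind(X[y == 0], X[y == 1], axis=0).statistic),
        "wx":  np.abs([ranksums(X[y == 0, j], X[y == 1, j]).statistic
                       for j in range(X.shape[1])]),
        "roc": np.abs([roc_auc_score(y, X[:, j]) - 0.5 for j in range(X.shape[1])]),
        "snr": np.abs(X[y == 0].mean(0) - X[y == 1].mean(0)) /
               (X[y == 0].std(0) + X[y == 1].std(0) + 1e-12),
    }
    # rank position of each gene under each criterion (0 = most discriminative)
    ranks = np.mean([np.argsort(np.argsort(-s)) for s in scores.values()], axis=0)
    return np.argsort(ranks)          # gene indices, best first

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 500))   # 60 samples x 500 synthetic 'genes'
y = np.repeat([0, 1], 30)
top_genes = rank_genes(X, y)[:20]
```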

Relevance: 30.00%

Abstract:

This paper introduces a hybrid feature extraction method applied to mass spectrometry (MS) data for cancer classification. Haar wavelets are employed to transform the MS data into orthogonal wavelet coefficients, and the most prominent discriminant wavelets are then selected by a genetic algorithm (GA) to form the feature sets. The combination of wavelets and the GA yields highly distinct feature sets that serve as inputs to classification algorithms. Experimental results show the robustness and significant dominance of the wavelet-GA approach over competing methods. The proposed method can therefore be applied in cancer classification models that are useful as clinical decision support systems for medical practitioners.
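
The wavelet-plus-GA selection step can be sketched as follows: a tiny genetic algorithm evolves bitmasks over Haar wavelet coefficients, with cross-validated classifier accuracy as the fitness. The classifier choice, the GA settings and the synthetic ‘spectra’ are all assumptions for illustration:

```python
import numpy as np
import pywt
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

def ga_select(coeffs, y, pop=20, gens=10, p_mut=0.02):
    """Evolve boolean masks over wavelet coefficients; fitness is the
    3-fold cross-validated accuracy of a classifier on the masked features."""
    n = coeffs.shape[1]
    masks = rng.random((pop, n)) < 0.1

    def fitness(m):
        return cross_val_score(GaussianNB(), coeffs[:, m], y, cv=3).mean() if m.any() else 0.0

    for _ in range(gens):
        fit = np.array([fitness(m) for m in masks])
        parents = masks[np.argsort(fit)[::-1][: pop // 2]]   # keep the fitter half
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = int(rng.integers(1, n))
            child = np.concatenate([a[:cut], b[cut:]])       # one-point crossover
            child ^= rng.random(n) < p_mut                   # bit-flip mutation
            children.append(child)
        masks = np.vstack([parents] + children)
    return masks[np.argmax([fitness(m) for m in masks])]

# synthetic stand-in for MS spectra: 60 'spectra' of length 256
X = rng.standard_normal((60, 256))
y = np.repeat([0, 1], 30)
coeffs = np.array([np.concatenate(pywt.wavedec(x, "haar", level=4)) for x in X])
best_mask = ga_select(coeffs, y)
```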