55 results for Weakly Compact Sets


Relevance: 20.00%

Abstract:

This paper evaluates the efficiency of a number of popular corpus-based distributional models in performing literature-based discovery on very large document sets, including online collections. Literature-based discovery is the process of identifying previously unknown connections from text, often published literature, that could lead to the development of new techniques or technologies. It has attracted growing research interest ever since Swanson's serendipitous discovery of the therapeutic effects of fish oil on Raynaud's disease in 1986. The successful application of distributional models in automating the identification of the indirect associations underpinning literature-based discovery has been amply demonstrated in the medical domain. However, we wish to investigate the computational complexity of distributional models for literature-based discovery on much larger document collections, as they may provide computationally tractable solutions to tasks such as predicting future disruptive innovations. In this paper we perform a computational complexity analysis of four successful corpus-based distributional models to evaluate their fitness for such tasks. Our results indicate that corpus-based distributional models that store their representations in fixed dimensions provide superior efficiency on literature-based discovery tasks.
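
For intuition about why fixed-dimension representations scale well, here is a minimal sketch (not the paper's code) of one such model, random indexing, in which every term representation occupies a constant number of dimensions regardless of vocabulary size. The corpus, dimensionality and seed-vector sparsity below are illustrative assumptions.

```python
import numpy as np

DIM = 1024           # fixed representation width, independent of vocabulary size
rng = np.random.default_rng(0)

index_vectors = {}    # sparse ternary "index" vectors, one per term
context_vectors = {}  # learned representations, always shape (DIM,)

def index_vector(term):
    # Each term gets a fixed random signature; memory per term is O(DIM),
    # so storage grows linearly with vocabulary rather than quadratically
    # as with a full term-by-term co-occurrence matrix.
    if term not in index_vectors:
        v = np.zeros(DIM)
        pos = rng.choice(DIM, size=10, replace=False)
        v[pos] = rng.choice([-1.0, 1.0], size=10)
        index_vectors[term] = v
    return index_vectors[term]

def train(corpus, window=2):
    for doc in corpus:
        for i, term in enumerate(doc):
            ctx = context_vectors.setdefault(term, np.zeros(DIM))
            for j in range(max(0, i - window), min(len(doc), i + window + 1)):
                if j != i:
                    ctx += index_vector(doc[j])  # O(DIM) per update

# Toy corpus echoing Swanson's example: "oil" and "raynaud" never co-occur,
# but an indirect association arises through shared context ("blood viscosity").
train([["fish", "oil", "reduces", "blood", "viscosity"],
       ["raynaud", "disease", "involves", "blood", "viscosity"]])

a, b = context_vectors["oil"], context_vectors["raynaud"]
print(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
```

Because each update touches only one DIM-length vector, time and memory per term stay fixed as the collection grows, which is the efficiency property the complexity analysis favours.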

Relevance: 20.00%

Abstract:

The current state of knowledge in relation to first flush does not provide a clear understanding of the role of rainfall and catchment characteristics in influencing this phenomenon. This is attributed to inconsistent findings across research studies, stemming from the unsatisfactory selection of first flush indicators and from how first flush is defined. The research study discussed in this thesis provides the outcomes of a comprehensive analysis of the influence of rainfall and catchment characteristics on first flush behaviour in residential catchments. Two sets of first flush indicators are introduced in this study, selected to explain in a systematic manner the characteristics associated with first flush. Stormwater samples and rainfall-runoff data were collected and recorded at stormwater monitoring stations established at three urban catchments at Coomera Waters, Gold Coast, Australia, and historical data were also used to support the data analysis. Three water quality parameters were analysed: total suspended solids (TSS), total phosphorus (TP) and total nitrogen (TN). The data analyses were primarily undertaken using the multi-criteria decision-making methods PROMETHEE and GAIA.

Based on the data obtained, the pollutant load distribution (LV) curve was determined for individual rainfall events and pollutant types. Two sets of first flush indicators were derived from this curve: the cumulative load wash-off for every 10% of runoff volume from the beginning of the event (interval first flush indicators, LV) and the actual pollutant load wash-off during each 10% increment in runoff volume (section first flush indicators, P). First flush behaviour varied significantly with pollutant type. TSS and TP showed consistent first flush behaviour; the dissolved fraction of TN, however, differed significantly from TSS and TP, while particulate TN behaved similarly. Wash-off of TSS, TP and particulate TN during the first 10% of the runoff volume showed no influence from the corresponding rainfall intensity. This was attributed to the wash-off of weakly adhered solids on the catchment surface, referred to as the "short term pollutant" or "weakly adhered solids" load. Wash-off after 10% of the runoff volume, however, depended on rainfall intensity, which is attributed to strongly adhered solids being exposed once the weakly adhered solids diminish. The wash-off process was also found to depend on rainfall depth towards the end of an event, as the strongly adhered solids are loosened by the impact of rainfall earlier in the event.

Events with high intensity rainfall bursts after 70% of the runoff volume did not demonstrate first flush behaviour, which suggests that the rainfall pattern plays a critical role in the occurrence of first flush. The rainfall intensity (relative to the rest of the event) that produces 10% to 20% of the runoff volume plays an important role in defining the magnitude of the first flush: events demonstrate a high magnitude first flush when that intensity is comparatively high, while low rainfall intensities during this period produce a low magnitude first flush. For events with first flush, the phenomenon is clearly visible up to 40% of the runoff volume. This contradicts the common definition that first flush only exists if, for example, 80% of the pollutant mass is transported in the first 30% of the runoff volume.

First flush behaviour for TN differs from that of TSS and TP. Apart from rainfall characteristics, the composition and availability of TN on the catchment also play an important role in first flush. The analysis confirmed that events with low rainfall intensity can produce a high magnitude first flush for the dissolved fraction of TN, while high rainfall intensity produces a low dissolved TN first flush. This is attributed to the source-limiting behaviour of dissolved TN wash-off, in which wash-off is high during the initial part of a rainfall event irrespective of the intensity. For particulate TN, the influence of rainfall intensity on first flush characteristics is similar to TSS and TP. The data analysis also confirmed that first flush can occur as a high magnitude first flush, a low magnitude first flush, or not at all.

Investigation of the influence of catchment characteristics on first flush found that the key factors are the location of the pollutant source, the spatial distribution of the pervious and impervious surfaces in the catchment, the drainage network layout and the slope of the catchment. This confirms that the first flush phenomenon cannot be evaluated from a single or limited set of parameters, as a number of catchment characteristics must be taken into account. Catchments where the pollutant source is located close to the outlet, with a high fraction of road surfaces, short travel times to the outlet and steep slopes, can produce a high wash-off load during the first 50% of the runoff volume. Rainfall characteristics have a comparatively dominant impact on the wash-off process compared to catchment characteristics. In addition, pollutant characteristics should also be taken into account in designing stormwater treatment systems owing to their different wash-off behaviour. Analysis outcomes confirmed a high TSS load during the first 20% of the runoff volume, followed by TN, which can extend up to 30% of the runoff volume. In contrast, a high TP load can exist during both the initial and the final parts of a rainfall event, which is related to the composition of the TP available for wash-off.
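
As an illustration of the indicator construction described above, the following sketch (hypothetical event data, not from the thesis) computes a pollutant load distribution curve for one event and derives interval and section indicators from it.

```python
import numpy as np

def lv_indicators(runoff, conc, n_sections=10):
    """Interval and section first flush indicators from one event's series.

    runoff: volumetric runoff per time step; conc: pollutant concentration.
    """
    load = runoff * conc                        # incremental pollutant load
    cum_v = np.cumsum(runoff) / np.sum(runoff)  # cumulative runoff fraction
    cum_l = np.cumsum(load) / np.sum(load)      # cumulative load fraction
    # Interval indicators: cumulative load at every 10% of runoff volume.
    edges = np.linspace(0.1, 1.0, n_sections)
    interval = np.interp(edges, cum_v, cum_l)
    # Section indicators: load washed off within each 10% increment.
    section = np.diff(np.concatenate(([0.0], interval)))
    return interval, section

# Hypothetical event: runoff peaking early, concentration decaying over time.
t = np.arange(60)
runoff = np.exp(-((t - 15) / 12.0) ** 2) + 0.05
conc = 40.0 * np.exp(-t / 20.0) + 5.0

interval, section = lv_indicators(runoff, conc)
print("cumulative load at 10%..100% runoff:", np.round(interval, 2))
print("load per 10% runoff increment:      ", np.round(section, 2))
```

A first flush appears as early interval values well above the 1:1 line, for example a cumulative load fraction at 20% of runoff volume that is substantially greater than 0.2.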

Relevance: 20.00%

Abstract:

Application of 'advanced analysis' methods suitable for non-linear analysis and design of steel frame structures permits direct and accurate determination of ultimate system strengths, without resort to simplified elastic methods of analysis and semi-empirical specification equations. However, the application of advanced analysis methods has previously been restricted to steel frames comprising only compact sections that are not influenced by the effects of local buckling. A research project has been conducted with the aim of developing concentrated plasticity methods suitable for practical advanced analysis of steel frame structures comprising non-compact sections. A primary objective was to produce a comprehensive range of new distributed plasticity analytical benchmark solutions for verification of the concentrated plasticity methods. A distributed plasticity model was developed using shell finite elements to explicitly account for the effects of gradual yielding and spread of plasticity, initial geometric imperfections, residual stresses and local buckling deformations. The model was verified by comparison with large-scale steel frame test results and a variety of existing analytical benchmark solutions. This paper presents a description of the distributed plasticity model and details of the verification study.

Relevance: 20.00%

Abstract:

Application of 'advanced analysis' methods suitable for non-linear analysis and design of steel frame structures permits direct and accurate determination of ultimate system strengths, without resort to simplified elastic methods of analysis and semi-empirical specification equations. However, the application of advanced analysis methods has previously been restricted to steel frames comprising only compact sections that are not influenced by the effects of local buckling. A concentrated plasticity method suitable for practical advanced analysis of steel frame structures comprising non-compact sections is presented in this paper. The pseudo plastic zone method implicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. The accuracy and precision of the method for the analysis of steel frames comprising non-compact sections are established by comparison with a comprehensive range of analytical benchmark frame solutions. The pseudo plastic zone method is shown to be more accurate and precise than the conventional individual member design methods based on elastic analysis and specification equations.

Relevance: 20.00%

Abstract:

Application of 'advanced analysis' methods suitable for non-linear analysis and design of steel frame structures permits direct and accurate determination of ultimate system strengths, without resort to simplified elastic methods of analysis and semi-empirical specification equations. However, the application of advanced analysis methods has previously been restricted to steel frames comprising only compact sections that are not influenced by the effects of local buckling. A research project has been conducted with the aim of developing concentrated plasticity methods suitable for practical advanced analysis of steel frame structures comprising non-compact sections. A series of large-scale tests were performed in order to provide experimental results for verification of the new analytical models. Each of the test frames comprised non-compact sections, and exhibited significant local buckling behaviour prior to failure. This paper presents details of the test program including the test specimens, set-up and instrumentation, procedure, and results.

Relevance: 20.00%

Abstract:

In this paper we present large, accurately calibrated and time-synchronized data sets, gathered outdoors in controlled and variable environmental conditions, using an unmanned ground vehicle (UGV) equipped with a wide variety of sensors, including four 2D laser scanners, a radar scanner, a color camera and an infrared camera. The paper provides a full description of the system used for data collection and of the types of environments and conditions in which these data sets were gathered, including the presence of airborne dust, smoke and rain.

Relevance: 20.00%

Abstract:

Traditional nearest points methods use all the samples in an image set to construct a single convex or affine hull model for classification. However, strong artificial features and noisy data may be generated from combinations of training samples when significant intra-class variations and/or noise occur in the image set. Existing multi-model approaches extract local models by clustering each image set individually only once, with fixed clusters used for matching against various image sets. This may not be optimal for discrimination, as undesirable environmental conditions (e.g., illumination and pose variations) may result in the two closest clusters representing different characteristics of an object (e.g., a frontal face being compared to a non-frontal face). To address this problem, we propose a novel approach that enhances nearest points based methods by integrating affine/convex hull classification with an adapted multi-model approach. We first extract multiple local convex hulls from a query image set via maximum margin clustering to diminish the artificial variations and constrain the noise in local convex hulls. We then propose adaptive reference clustering (ARC) to constrain the clustering of each gallery image set by forcing the clusters to resemble those in the query image set. By applying ARC, noisy clusters in the query set can be discarded. Experiments on the Honda, MoBo and ETH-80 datasets show that the proposed method outperforms single model approaches and other recent techniques, such as Sparse Approximated Nearest Points, Mutual Subspace Method and Manifold Discriminant Analysis.
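
The core geometric step in nearest points methods is computing the minimum distance between two convex hulls, which can be posed as a small constrained optimisation problem. The sketch below (an illustration, not the authors' implementation) solves it with SciPy; the random matrices stand in for vectorised image features.

```python
import numpy as np
from scipy.optimize import minimize

def convex_hull_distance(X, Y):
    """min ||X a - Y b|| s.t. a, b >= 0 and sum(a) = sum(b) = 1,
    where the columns of X and Y are the samples of the two sets."""
    nx, ny = X.shape[1], Y.shape[1]

    def objective(w):
        a, b = w[:nx], w[nx:]
        return np.sum((X @ a - Y @ b) ** 2)

    cons = [{"type": "eq", "fun": lambda w: np.sum(w[:nx]) - 1.0},
            {"type": "eq", "fun": lambda w: np.sum(w[nx:]) - 1.0}]
    w0 = np.concatenate([np.full(nx, 1.0 / nx), np.full(ny, 1.0 / ny)])
    res = minimize(objective, w0, bounds=[(0.0, 1.0)] * (nx + ny),
                   constraints=cons)
    return np.sqrt(res.fun)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (5, 8))  # gallery set: 8 samples in R^5
Y = rng.normal(3.0, 1.0, (5, 6))  # query set: 6 samples in R^5
print(convex_hull_distance(X, Y))
```

The multi-model variant described above applies this distance between local hulls obtained by clustering each set, rather than between single global hulls.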

Relevance: 20.00%

Abstract:

This project improved the detection and classification of very weakly expressed RhD variants in the Australian blood donor panel and contributed to knowledge of the anti-D reactivity patterns of previously undescribed RHD alleles. As such, the management of donations carrying these RHD alleles can be improved and the overall safety of transfusion medicine pertaining to the Rh blood group system increased. Future projects at ARCBS will be able to use the procedures developed in this project, thereby decreasing throughput time. The specificity of current testing will be improved and the need for outsourced RHD testing diminished.

Relevance: 20.00%

Abstract:

Standard signature schemes are usually designed only to achieve weak unforgeability, i.e. to prevent forgery of signatures on new messages not previously signed. However, most signature schemes are randomised and allow many possible signatures for a single message. In this case, it may be possible to produce a new signature on a previously signed message. Some applications require that this type of forgery also be prevented; this requirement is called strong unforgeability. At PKC 2006, Boneh, Shen and Waters presented an efficient transform, based on any randomised trapdoor hash function, which converts a weakly unforgeable signature into a strongly unforgeable one, and applied it to construct a strongly unforgeable signature based on the CDH problem. However, the transform of Boneh et al. applies only to a class of so-called partitioned signatures. Although many schemes fall into this class, some do not, for example the DSA signature. Hence it is natural to ask whether one can obtain a truly generic efficient transform, based on any randomised trapdoor hash function, which converts any weakly unforgeable signature into a strongly unforgeable one. We answer this question in the affirmative by presenting a simple modification of the Boneh-Shen-Waters transform. Our modified transform uses two randomised trapdoor hash functions.
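
To make the distinction concrete, the following schematic (abstract placeholders, not a concrete scheme) expresses weak versus strong unforgeability as the forgery-acceptance checks of the respective security games.

```python
# `verify` is the scheme's verification function and `queried` the list of
# (message, signature) pairs returned by the signing oracle; both are
# abstract placeholders used purely for illustration.

def weak_forgery(verify, queried, m, sig):
    # Weak (existential) unforgeability: the forged message must be new.
    return verify(m, sig) and m not in {qm for qm, _ in queried}

def strong_forgery(verify, queried, m, sig):
    # Strong unforgeability: a *new signature* on an already-signed message
    # also counts, so a randomised scheme whose signatures can be
    # re-randomised is weakly but not strongly unforgeable.
    return verify(m, sig) and (m, sig) not in queried
```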

Relevance: 20.00%

Abstract:

Analytically or computationally intractable likelihood functions can arise in complex statistical inference problems, making them inaccessible to standard Bayesian methods. Approximate Bayesian computation (ABC) methods address such problems by replacing direct likelihood evaluations with repeated sampling from the model. ABC methods have predominantly been applied to parameter estimation rather than model choice, owing to the added difficulty of handling multiple model spaces. The ABC algorithm proposed here addresses model choice by extending the approach of Fearnhead and Prangle (2012, Journal of the Royal Statistical Society, Series B 74, 1–28), in which the posterior means of the model parameters, estimated through regression, form the summary statistics used in the discrepancy measure. An additional stepwise multinomial logistic regression is performed on the model indicator variable in the regression step, and the estimated model probabilities are incorporated into the set of summary statistics for model choice purposes. A reversible jump Markov chain Monte Carlo step is also included in the algorithm to increase model diversity and ensure thorough exploration of the model space. The algorithm was applied to a validating example to demonstrate its robustness across a wide range of true model probabilities. Its subsequent use in three pathogen transmission examples of varying complexity illustrates its utility in inferring preference for particular transmission models.
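
For readers unfamiliar with ABC model choice, the sketch below shows only the basic rejection-sampling variant, with two hypothetical count models; the regression-based summary statistics and reversible jump MCMC step of the proposed algorithm are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
observed = rng.poisson(4.0, size=50)               # stand-in for field data
s_obs = np.array([observed.mean(), observed.var()])

def simulate(model, n=50):
    lam = rng.gamma(2.0, 2.0)                      # prior draw for the rate
    if model == 0:                                 # M0: Poisson counts
        return rng.poisson(lam, size=n)
    # M1: overdispersed counts via a Gamma-Poisson mixture
    return rng.poisson(rng.gamma(5.0, lam / 5.0, size=n))

accepted = []
for _ in range(20000):
    m = int(rng.integers(0, 2))                    # uniform prior over models
    x = simulate(m)
    s_sim = np.array([x.mean(), x.var()])
    # Accept when simulated summaries are close to the observed ones.
    if np.linalg.norm((s_sim - s_obs) / (np.abs(s_obs) + 1e-9)) < 0.15:
        accepted.append(m)

counts = np.bincount(accepted, minlength=2)
print("estimated posterior model probabilities:", counts / max(counts.sum(), 1))
```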

Relevance: 20.00%

Abstract:

Introduction: Built environment interventions designed to reduce non-communicable diseases and health inequity complement urban planning agendas focused on creating more 'liveable', compact, pedestrian-friendly, less automobile-dependent and more socially inclusive cities. However, what constitutes a 'liveable' community is not well defined, and there appears to be a gap between the concept and the delivery of 'liveable' communities. The recently funded NHMRC Centre of Research Excellence (CRE) in Healthy Liveable Communities, established in early 2014, has defined 'liveability' from a social determinants of health perspective. Using purpose-designed multilevel longitudinal data sets, it addresses five themes that target key evidence gaps for building healthy and liveable communities. The CRE in Healthy Liveable Communities seeks to generate and exchange new knowledge about: 1) measurement of policy-relevant built environment features associated with leading non-communicable disease risk factors (physical activity, obesity), health outcomes (cardiovascular disease, diabetes) and mental health; 2) causal relationships for built environment interventions, using data from longitudinal studies and natural experiments; 3) thresholds for built environment interventions; 4) economic benefits of built environment interventions designed to influence health and wellbeing outcomes; and 5) factors, tools, and interventions that facilitate the translation of research into policy and practice. This evidence is critical to inform future policy and practice in health, land use, and transport planning. Moreover, to ensure policy relevance and facilitate research translation, the CRE in Healthy Liveable Communities builds on ongoing multi-sector collaborations with national and state policy-makers and practitioners, and has established new ones. The symposium will commence with a brief introduction to embed the research within an Australian health and urban planning context and to outline the CRE in Healthy Liveable Communities, its structure and team. Next, an overview of the five research themes will be presented. Following these presentations, the Discussant will consider the implications of the research and opportunities for translation and knowledge exchange. Theme 2 will establish whether, and to what extent, the neighbourhood environment (built and social) is causally related to physical and mental health and associated behaviours and risk factors.
In particular, research conducted as part of this theme will: use data from large-scale, longitudinal multilevel studies (HABITAT, RESIDE, AusDiab) to examine relationships that meet causality criteria, via statistical methods such as longitudinal mixed-effect and fixed-effect models and multilevel and structural equation models; analyse data on residential preferences to investigate confounding due to neighbourhood self-selection, using measurement and analysis tools such as propensity score matching and 'within-person' change modelling to address confounding; analyse data on individual-level factors that might confound, mediate or modify relationships between the neighbourhood environment and health and well-being (e.g., psychosocial factors, knowledge, perceptions, attitudes, functional status); and analyse data on both objective neighbourhood characteristics and residents' perceptions of those objective features, to assess more accurately the relative contributions of objective and perceptual factors to outcomes such as health and well-being, physical activity, active transport, obesity, and sedentary behaviour. At the completion of Theme 2, we will have demonstrated and applied statistical methods appropriate for determining causality and generated evidence about causal relationships between the neighbourhood environment, health, and related outcomes. This will provide planners and policy makers with a more robust (valid and reliable) basis on which to design healthy communities.
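
As a minimal illustration of the longitudinal mixed-effects modelling named in Theme 2, the sketch below fits a random-intercept model to simulated repeated-measures data; all variable names (walkability, active_mins) and effect sizes are hypothetical, not CRE study variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_people, n_waves = 200, 3
pid = np.repeat(np.arange(n_people), n_waves)      # person id, repeated per wave
wave = np.tile(np.arange(n_waves), n_people)       # survey wave (time)
walkability = rng.normal(0, 1, n_people)[pid]      # time-invariant exposure
person_re = rng.normal(0, 5, n_people)[pid]        # person-level random intercept
active_mins = (30 + 4 * walkability + 2 * wave
               + person_re + rng.normal(0, 5, pid.size))

df = pd.DataFrame({"pid": pid, "wave": wave,
                   "walkability": walkability, "active_mins": active_mins})

# Random-intercept model: repeated observations nested within people.
model = smf.mixedlm("active_mins ~ walkability + wave", df, groups=df["pid"])
print(model.fit().summary())
```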