542 results for Multiple classification


Relevance:

20.00%

Publisher:

Abstract:

Online social networks connect millions of people around the globe, and these electronic bonds make individuals comfortable sharing information about their behaviours. This willingness to share information is a useful phenomenon that warrants consideration as a socio-scientific effect. Many web users now hold more than one social networking account, meaning a user may maintain multiple profiles stored on different Social Network Sites (SNSs). Maintaining these multiple online social network profiles is cumbersome and time-consuming [1]. In this paper we propose a framework for the management of a user's multiple profiles. A demonstrator, called the Multiple Profile Manager (MPM), is showcased to illustrate the effectiveness of the framework.
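As a purely illustrative sketch of the kind of profile-management core the MPM demonstrator implies, the Python fragment below synchronises a single attribute across several linked profiles. All names here (Profile, MPMCore, sync_field) are hypothetical; the paper does not publish an API.

    # Hypothetical sketch of a multiple-profile synchronisation core.
    from dataclasses import dataclass, field

    @dataclass
    class Profile:
        site: str                      # e.g. "Facebook", "LinkedIn"
        fields: dict = field(default_factory=dict)

    class MPMCore:
        """Holds one user's profiles across several social network sites."""
        def __init__(self):
            self.profiles = {}         # site name -> Profile

        def register(self, profile: Profile):
            self.profiles[profile.site] = profile

        def sync_field(self, key: str, value: str):
            # Propagate one attribute change to every linked profile,
            # so the user edits it once instead of once per site.
            for profile in self.profiles.values():
                profile.fields[key] = value

    mpm = MPMCore()
    mpm.register(Profile("Facebook", {"city": "Brisbane"}))
    mpm.register(Profile("LinkedIn", {}))
    mpm.sync_field("city", "Sydney")   # both profiles now agree

The design point is that the user edits a shared attribute once and the manager propagates it to every registered site, which is what makes maintaining multiple profiles less cumbersome.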

Relevance:

20.00%

Publisher:

Abstract:

This article outlines the key recommendations of the Australian Law Reform Commission’s review of the National Classification Scheme, as presented in its report Classification – Content Regulation and Convergent Media (ALRC, 2012). It identifies key contextual factors that underpin the need for reform of media classification laws and policies, including the fragmentation of regulatory responsibilities and the convergence of media platforms, content and services, as well as discussing the ALRC’s approach to law reform.

Relevance:

20.00%

Publisher:

Abstract:

Multiple choice (MC) examinations are frequently used for the summative assessment of large classes because of their ease of marking and their perceived objectivity. However, traditional MC formats usually lead to a surface approach to learning, and do not allow students to demonstrate the depth of their knowledge or understanding. For these reasons, we have trialled the incorporation of short answer (SA) questions into the final examination of two first year chemistry units, alongside MC questions. Students’ overall marks were expected to improve, because they were able to obtain partial marks for the SA questions. Although large differences in some individual students’ performance in the two sections of their examinations were observed, most students received a similar percentage mark for their MC and SA sections, and the overall mean scores were unchanged. In-depth analysis of all responses to a specific question, which was used previously as an MC question and in a subsequent semester in SA format, indicates that the SA format can have weaknesses due to marking inconsistencies that are absent for MC questions. However, inclusion of SA questions improved student scores on the MC section in one examination, indicating that their inclusion may lead to different study habits and deeper learning. We conclude that questions asked in SA format must be carefully chosen in order to optimise the use of marking resources, both financial and human, and questions asked in MC format should be very carefully checked by people trained in writing MC questions. These results, in conjunction with an analysis of the different examination formats used in first year chemistry units, have shaped a recommendation on how to reliably and cost-effectively assess first year chemistry, while encouraging higher order learning outcomes.
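For illustration only, a paired comparison of the kind described, each student's MC section mark against their SA section mark, might be run as below. The marks are invented; the abstract reports only aggregate outcomes (similar section marks, unchanged overall means).

    # Hypothetical per-student MC vs SA section marks (%), paired t-test.
    from scipy import stats

    mc_pct = [72, 65, 80, 55, 90, 68, 74]   # invented MC section marks
    sa_pct = [70, 60, 82, 58, 85, 66, 75]   # invented SA section marks

    t, p = stats.ttest_rel(mc_pct, sa_pct)  # paired across students
    print(f"mean MC = {sum(mc_pct)/len(mc_pct):.1f}%, "
          f"mean SA = {sum(sa_pct)/len(sa_pct):.1f}%, p = {p:.3f}")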

Relevance:

20.00%

Publisher:

Abstract:

Process modelling – the design and use of graphical documentations of an organisation’s business processes – is a key method for documenting and using information about business processes in organisational projects. Despite current interest in process modelling, this area of study still faces essential challenges. One of the key unanswered questions concerns the impact of process modelling in organisational practice: process modelling initiatives call for tangible results in the form of returns on the substantial investments that organisations undertake to achieve improved processes. This study explores the impact of process model use on end-users and its contribution to organisational success. We posit that the use of conceptual models creates impact in organisational process teams, and we report on a set of case studies in which we explore tentative evidence for how the impact of process model use develops. The results of this work provide a better understanding of the impact of process modelling in practice, and also lead to insights into how organisations should conduct process modelling initiatives in order to achieve an optimum return on their investment.

Relevance:

20.00%

Publisher:

Abstract:

Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated, and move toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in bioinformatics, and when used in conjunction with comparative genomics they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we explored the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcriptional regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features, some of which proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Of chief interest was the relationship observed between promoter strength and TFs grouped by their regulatory role. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. The t-tests assessed for E. coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis, albeit this trend may only be present for promoters whose corresponding TFBSs are either all repressors or all activators. Although the observations were specific to σ70, such suggestive results strongly encourage additional investigation when more experimentally confirmed data become available.
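The hypothesis test described above can be sketched as a one-sided two-sample t-test. The promoter-strength values below are invented stand-ins; only the reported p-value of 0.072 comes from the thesis.

    # Illustrative one-sided test: are activator-associated promoters
    # weaker on average than repressor-associated ones? Values invented.
    from scipy import stats   # 'alternative' kwarg needs SciPy >= 1.6

    strength_activator_sites = [0.31, 0.42, 0.28, 0.35, 0.40]  # hypothetical
    strength_repressor_sites = [0.52, 0.47, 0.61, 0.44, 0.58]  # hypothetical

    t, p = stats.ttest_ind(strength_activator_sites,
                           strength_repressor_sites,
                           equal_var=False, alternative="less")
    print(f"t = {t:.2f}, one-sided p = {p:.3f} "
          f"(support for H1 at alpha = 0.1 if p < 0.1)")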
Much of the remainder of the thesis concerns a machine learning study of binding site prediction using the SVM and kernel methods, principally the spectrum kernel. Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in 'moderately' conserved transcription factor binding sites, as represented by our E. coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving the inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance, in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains, which revealed interesting strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation, owing to its potentially active functionality in non-pathogens and its known participation in the full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled 'regulatory trees', inspired by the phylogenetic tree concept. Instead of using gene or protein sequence similarity, regulatory trees are constructed from the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to 'hardware', the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to 'software'. In this context, we explored the 'pan-regulatory network' for the Fur system: the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships and increased confidence in the predicted regulatory interactions. In the present study, we distinguish between relationships found across the full set of genomes, the 'core-regulatory-set', and interactions found only in a subset of the genomes explored, the 'sub-regulatory-set'. We found nine Fur target gene clusters present across the four genomes studied, with this core set potentially identifying basic regulatory processes essential for survival.
Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y. pestis and P. aeruginosa respectively, but were not present in either E. coli or B. subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen-specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study. We identified a set of promising feature attributes, demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity, and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
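As a minimal sketch of the spectrum-kernel SVM approach named above (not the thesis's actual pipeline), the fragment below builds k-mer count features, whose inner product is exactly the spectrum kernel, and trains a linear SVM on toy sequences. All sequences and labels are invented; a real study would use verified binding sites as positives and background DNA as negatives.

    # Spectrum (k-mer count) features + linear SVM = spectrum kernel SVM.
    from itertools import product
    from sklearn.svm import SVC

    K = 3
    KMERS = ["".join(p) for p in product("ACGT", repeat=K)]

    def spectrum(seq):
        # Count each k-mer; the linear kernel on these vectors is the
        # spectrum kernel on the underlying sequences.
        return [sum(seq[i:i+K] == km for i in range(len(seq) - K + 1))
                for km in KMERS]

    seqs = ["TGTGATCTAGATCACA", "TGTGACGTAGGTCACT",   # invented CRP-like sites
            "GGGCCCTTTAAACCGG", "ATATATATGCGCGCGC"]   # invented background
    labels = [1, 1, 0, 0]

    clf = SVC(kernel="linear").fit([spectrum(s) for s in seqs], labels)
    print(clf.predict([spectrum("TGTGATCGAGGTCACA")]))  # score a new candidate

Position feature attributes of the kind the thesis describes would simply be appended to each spectrum vector before training.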

Relevance:

20.00%

Publisher:

Abstract:

Reliable ambiguity resolution (AR) is essential to Real-Time Kinematic (RTK) positioning and its applications, since incorrect ambiguity fixing can lead to largely biased positioning solutions. A partial ambiguity fixing technique is developed to improve the reliability of AR, involving partial ambiguity decorrelation (PAD) and partial ambiguity resolution (PAR). The decorrelation transformation can substantially amplify biases in the phase measurements, so the purpose of PAD is to find the optimum trade-off between decorrelation and worst-case bias amplification. The concept of PAR refers to the case where only a subset of the ambiguities can be fixed correctly to their integers in the integer least-squares (ILS) estimation system at high success rates. As a result, RTK solutions can be derived from these integer-fixed phase measurements. This is meaningful provided that the number of reliably resolved phase measurements is sufficiently large for least-squares estimation of the RTK solutions as well; considering the GPS constellation alone, partially fixed measurements are often insufficient for positioning. The AR reliability is usually characterised by the AR success rate. In this contribution an AR validation decision matrix is first introduced to understand the impact of the success rate, and the AR risk probability is then included in a more complete evaluation of AR reliability. We use 16 ambiguity variance-covariance matrices with different levels of success rate to analyse the relation between success rate and AR risk probability. Next, the paper examines how, during the PAD process, a bias in one measurement is propagated and amplified onto many others, leading to more than one wrong integer and degrading the success probability. Furthermore, the paper proposes a partial ambiguity fixing procedure with a predefined success rate criterion and a ratio-test in the ambiguity validation process. Galileo constellation data are tested with simulated observations. Numerical results from our experiment clearly demonstrate that only when the computed success rate is very high can AR validation provide decisions about the correctness of AR that are close to the real world, with both low AR risk and low false alarm probabilities. The results also indicate that the PAR procedure can automatically choose an adequate number of ambiguities to fix at a given high success rate from the multiple constellations, instead of fixing all the ambiguities. This is a benefit that multiple GNSS constellations can offer.
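A common way to compute the AR success rate referred to above is Teunissen's integer bootstrapping bound, P_s = Π_i (2Φ(1/(2σ_i)) − 1), taken over the conditional standard deviations σ_i of the decorrelated ambiguities. The sketch below illustrates it for an invented variance-covariance matrix; the paper itself may use a different estimator.

    # Bootstrapped AR success-rate bound from an ambiguity vc-matrix.
    import numpy as np
    from scipy.stats import norm

    def bootstrap_success_rate(Q):
        # Conditional std devs are the diagonal of the Cholesky factor:
        # Q = L L', with L[i,i] = sigma_{i|I}.
        L = np.linalg.cholesky(Q)
        s = np.diag(L)
        return np.prod(2 * norm.cdf(1.0 / (2.0 * s)) - 1)

    Q = np.array([[0.090, 0.020, 0.010],
                  [0.020, 0.070, 0.015],
                  [0.010, 0.015, 0.050]])   # invented vc-matrix (cycles^2)
    print(f"P_s = {bootstrap_success_rate(Q):.4f}")

    # PAR idea: if P_s falls below the predefined criterion, drop the least
    # precise ambiguity (largest conditional variance) and re-evaluate the
    # remaining subset, rather than fixing all ambiguities.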

Relevance:

20.00%

Publisher:

Abstract:

With the increasing number of stratospheric particles available for study (via the U2 and/or WB57F collections), it is essential that a simple, yet rational, classification scheme be developed for general use. Such a scheme should be applicable to all particles collected from the stratosphere, rather than limited to only extraterrestrial or chemical sub-groups. Criteria for the efficacy of such a scheme would include: (a) objectivity, (b) ease of use, (c) acceptance within the broader scientific community and (d) how well the classification provides intrinsic categories which are consistent with our knowledge of particle types present in the stratosphere.

Relevance:

20.00%

Publisher:

Abstract:

Several investigators have recently proposed classification schemes for stratospheric dust particles [1-3]. In addition, extraterrestrial materials within stratospheric dust collections may be used as a measure of micrometeorite flux [4]. However, little attention has been given to the problems of the stratospheric collection as a whole. Some of these problems include: (a) determination of accurate particle abundances at a given point in time; (b) the extent of bias in the particle selection process; (c) the variation of particle shape and chemistry with size; (d) the efficacy of proposed classification schemes and (e) an accurate determination of physical parameters associated with the particle collection process (e.g. minimum particle size collected, collection efficiency, variation of particle density with time). We present here preliminary results from SEM, EDS and, where appropriate, XRD analysis of all of the particles from a collection surface which sampled the stratosphere between 18 and 20 km in altitude. Determinations of particle densities from this study may then be used to refine models of the behavior of particles in the stratosphere [5].

Relevance:

20.00%

Publisher:

Abstract:

The development of public service broadcasters (PSBs) in the 20th century was framed around debates about their difference from commercial broadcasting. These debates navigated between two poles. One concerned the relationship between non-commercial sources of funding and the role played by statutory Charters as guarantors of the independence of PSBs. The other concerned the relationship between PSBs being both a complementary and a comprehensive service, although there are tensions inherent in this duality. In the 21st century, as reconfigured public service media organisations (PSMs) operate across multiple platforms in a convergent media environment, how are these debates changing, if at all? Is the case for PSM “exceptionalism” changed with Web-based services, catch-up TV, podcasting, ancillary product sales, and commissioning of programs from external sources in order to operate in highly diversified cross-media environments? Do the traditional assumptions about non-commercialism still hold as the basis for different forms of PSM governance and accountability? This paper will consider the question of PSM exceptionalism in the context of three reviews into Australian media that took place over 2011-2012: the Convergence Review undertaken through the Department of Broadband, Communications and the Digital Economy; the National Classification Scheme Review undertaken by the Australian Law Reform Commission; and the Independent Media Inquiry that considered the future of news and journalism.

Relevance:

20.00%

Publisher:

Abstract:

Background: Evaluation of scapular posture is a fundamental component in the clinical evaluation of the upper quadrant. This study examined the intrarater reliability of scapular posture ratings. Methods: A test-retest reliability investigation was undertaken with one week between assessment sessions. At each session physical therapists conducted visual assessments of scapular posture (relative to the thorax) in five different scapular postural planes (plane of scapula, sagittal plane, transverse plane, horizontal plane, and vertical plane). These five plane ratings were performed for four different scapular posture perturbating conditions (rest, and isometric shoulder flexion, abduction, and external rotation). Results: A total of 100 complete scapular posture ratings (50 left, 50 right) were undertaken at each assessment. The observed agreement between the test and retest postural plane ratings ranged from 59% to 87%; 16 of the 20 plane-condition combinations exceeded 75% observed agreement. Kappa (and prevalence-adjusted bias-adjusted kappa) values were inconsistent across the postural planes and perturbating conditions. Conclusions: This investigation generally revealed fair to moderate intrarater reliability in the rating of scapular posture by visual inspection. However, enough disagreement between assessments was present to warrant caution when interpreting perceived changes in scapular position between longitudinal assessments using visual inspection alone.
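For reference, the agreement statistics reported here can be computed from a 2x2 test-retest table as sketched below; the counts are invented for illustration.

    # Observed agreement, Cohen's kappa, and prevalence-adjusted
    # bias-adjusted kappa (PABAK) for a binary test-retest rating.
    def kappa_stats(a, b, c, d):
        # a = agree(+,+), d = agree(-,-), b and c = disagreements
        n = a + b + c + d
        po = (a + d) / n                              # observed agreement
        pe = ((a+b)*(a+c) + (c+d)*(b+d)) / n**2       # chance agreement
        kappa = (po - pe) / (1 - pe)
        pabak = 2 * po - 1                            # PABAK: pe fixed at 0.5
        return po, kappa, pabak

    po, k, pabak = kappa_stats(a=38, b=6, c=7, d=49)  # hypothetical counts
    print(f"agreement = {po:.2f}, kappa = {k:.2f}, PABAK = {pabak:.2f}")

PABAK is often reported alongside kappa, as here, because kappa is unstable when one rating category dominates (high prevalence) even if observed agreement is high.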

Relevance:

20.00%

Publisher:

Abstract:

Associations between single nucleotide polymorphisms (SNPs) at 5p15 and multiple cancer types have been reported. We have previously shown evidence for a strong association between prostate cancer (PrCa) risk and rs2242652 at 5p15, intronic in the gene that encodes telomerase reverse transcriptase (TERT). To comprehensively evaluate the association between genetic variation across this region and PrCa, we performed a fine-mapping analysis by genotyping 134 SNPs using a custom Illumina iSelect array or Sequenom MassArray iPlex, followed by imputation of 1094 SNPs, in 22,301 PrCa cases and 22,320 controls from the PRACTICAL consortium. Multiple stepwise logistic regression analysis identified four signals in the promoter or intronic regions of TERT that were independently associated with PrCa risk. Gene expression analysis of normal prostate tissue showed evidence that SNPs within one of these regions were also associated with TERT expression, providing a potential mechanism for predisposition to disease.
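A forward stepwise logistic regression of the kind named above can be sketched as follows: keep adding the SNP that most improves the model given those already selected, until no addition is significant. Genotypes and phenotypes are simulated here, and the threshold is an arbitrary stand-in for whatever criterion the consortium analysis used.

    # Forward stepwise logistic regression over simulated genotypes.
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import chi2

    rng = np.random.default_rng(0)
    n, m = 500, 10
    G = rng.binomial(2, 0.3, size=(n, m)).astype(float)  # 0/1/2 allele counts
    y = rng.binomial(1, 1 / (1 + np.exp(-(0.6 * G[:, 2] - 0.5))))

    def loglik(cols):
        X = sm.add_constant(G[:, cols]) if cols else np.ones((n, 1))
        return sm.Logit(y, X).fit(disp=0).llf

    selected, alpha = [], 1e-4                # alpha is a stand-in threshold
    while len(selected) < m:
        remaining = [c for c in range(m) if c not in selected]
        best = max(remaining, key=lambda c: loglik(selected + [c]))
        lrt = 2 * (loglik(selected + [best]) - loglik(selected))  # LR, 1 df
        if chi2.sf(lrt, df=1) >= alpha:
            break                             # no further independent signal
        selected.append(best)
    print("independent signals at SNP columns:", selected)

Each retained SNP represents a signal that remains associated after conditioning on the SNPs already in the model, which is what "independently associated" means in the abstract.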

Relevance:

20.00%

Publisher:

Abstract:

In this paper we propose and evaluate a speaker attribution system using a complete-linkage clustering method. Speaker attribution refers to the annotation of a collection of spoken audio based on speaker identities. This can be achieved using diarization and speaker linking. The main challenge associated with attribution is achieving computational efficiency when dealing with large audio archives. Traditional agglomerative clustering methods with model merging and retraining are not feasible for this purpose. This has motivated the use of linkage clustering methods without retraining. We first propose a diarization system using complete-linkage clustering and show that it outperforms traditional agglomerative and single-linkage clustering based diarization systems with a relative improvement of 40% and 68%, respectively. We then propose a complete-linkage speaker linking system to achieve attribution and demonstrate a 26% relative improvement in attribution error rate (AER) over the single-linkage speaker linking approach.
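The complete-linkage step can be sketched with standard hierarchical clustering tools, as below. The random embeddings stand in for the segment representations (e.g. speaker-model scores) an attribution system would actually compare; the key property is that a merge requires all cross-pairs to be close, unlike single linkage, which chains on the single closest pair.

    # Complete-linkage clustering of speaker segments, no model retraining.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(1)
    segments = np.vstack([rng.normal(0, .3, (5, 8)),    # "speaker A" segments
                          rng.normal(3, .3, (5, 8))])   # "speaker B" segments

    D = pdist(segments, metric="euclidean")   # all pairwise distances
    Z = linkage(D, method="complete")         # complete-linkage dendrogram
    labels = fcluster(Z, t=2.0, criterion="distance")   # cut at a threshold
    print(labels)                             # segment -> speaker cluster id

Because the distance matrix is computed once and never updated by retraining, this scales to large archives in a way that merge-and-retrain agglomerative clustering does not, which is the motivation given in the abstract.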

Relevance:

20.00%

Publisher:

Abstract:

A review of 291 catalogued particles on the basis of particle size, shape, bulk chemistry, and texture is used to establish a reliable taxonomy. Extraterrestrial materials occur in three defined categories: spheres, aggregates and fragments. Approximately 76% of aggregates are of probable extraterrestrial origin, whereas spheres contain the smallest proportion of extraterrestrial material (approximately 43%). -B.M.