424 results for Blog datasets


Relevance:

10.00%

Publisher:

Abstract:

Scalable high-resolution tiled display walls are becoming increasingly important to decision makers and researchers because high pixel counts in combination with large screen areas facilitate content rich, simultaneous display of computer-generated visualization information and high-definition video data from multiple sources. This tutorial is designed to cater for new users as well as researchers who are currently operating tiled display walls or 'OptiPortals'. We will discuss the current and future applications of display wall technology and explore opportunities for participants to collaborate and contribute in a growing community. Multiple tutorial streams will cover both hands-on practical development, as well as policy and method design for embedding these technologies into the research process. Attendees will be able to gain an understanding of how to get started with developing similar systems themselves, in addition to becoming familiar with typical applications and large-scale visualisation techniques. Presentations in this tutorial will describe current implementations of tiled display walls that highlight the effective usage of screen real-estate with various visualization datasets, including collaborative applications such as visualcasting, classroom learning and video conferencing. A feature presentation for this tutorial will be given by Jurgen Schulze from Calit2 at the University of California, San Diego. Jurgen is an expert in scientific visualization in virtual environments, human-computer interaction, real-time volume rendering, and graphics algorithms on programmable graphics hardware.

Relevance:

10.00%

Publisher:

Abstract:

Academic libraries have been acquiring ebooks for their collections for a number of years, but the uptake by some users has been curtailed by the limitations of screen reading on a traditional PC or laptop. Ebook readers promise to take us into a new phase of ebook development. Innovations include: wireless connectivity, electronic paper, increased battery life, and customisable displays. The recent rapid take-up of ebook readers in the United States, particularly Amazon’s Kindle, suggests that they may be about to gain mass-market acceptance. A bewildering number of ebook readers are being promoted by companies eager to gain market share. In addition, each month seems to bring a new ebook reader or a new model of an existing device. Library administrators are faced with the challenge of separating the hype from the reality and deciding when the time is right to invest in and support these new technologies. The Library at QUT, in particular the QUT Library Ebook Reference Group (ERG), has been closely following developments in ebooks and ebook reader technologies. During mid-2010 QUT Library undertook a trial of a range of ebook readers available to Australian consumers. Each of the ebook readers acquired was evaluated by members of the QUT Library ERG and two student focus groups. Major criteria for evaluation included usability, functionality, accessibility and compatibility with QUT Library’s existing ebook collection. The two student focus groups evaluated the ebook readers mostly according to the criteria of usability and functionality. This paper will discuss these evaluations and outline a recommendation about which type (or types) of ebook readers could be acquired and made available for lending to students.

Relevance:

10.00%

Publisher:

Abstract:

NF-Y is a heterotrimeric transcription factor complex. Each of the NF-Y subunits (NF-YA, NF-YB and NF-YC) in plants is encoded by multiple genes. Quantitative RT-PCR analysis revealed that five wheat NF-YC members (TaNF-YC5, 8, 9, 11 & 12) were upregulated by light in both the leaf and the seedling shoot. Co-expression analysis of Affymetrix wheat genome array datasets revealed that transcript levels of a large number of genes were consistently correlated with those of TaNF-YC11 and TaNF-YC8 across three to four separate array datasets. TaNF-YC11-correlated transcripts were significantly enriched for the Gene Ontology term 'photosynthesis'. Sequence analysis of the promoters of TaNF-YC11-correlated genes revealed the presence of putative NF-Y complex binding sites (CCAAT motifs). Quantitative RT-PCR analysis of a subset of potential TaNF-YC11 target genes showed that ten of the thirteen genes were also light-upregulated in both the leaf and seedling shoot and had expression profiles significantly correlated with that of TaNF-YC11. The potential target genes of TaNF-YC11 include subunit members of all four thylakoid membrane-bound complexes required for the conversion of solar energy into chemical energy, as well as rate-limiting enzymes of the Calvin cycle. These data indicate that TaNF-YC11 is potentially involved in the regulation of photosynthesis-related genes.
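As a rough illustration of the kind of analysis described above, the Python sketch below correlates candidate genes against a bait gene such as TaNF-YC11 and checks promoter sequences for CCAAT motifs. The function names, input formats and thresholds are assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np
from scipy.stats import pearsonr

def coexpressed_with_bait(expr, bait="TaNF-YC11", r_min=0.8, p_max=0.05):
    """Return genes whose expression profiles correlate with a bait gene.

    `expr` maps gene IDs to expression values measured across the same
    ordered set of array samples (hypothetical input format).
    """
    bait_profile = np.asarray(expr[bait], dtype=float)
    hits = {}
    for gene, profile in expr.items():
        if gene == bait:
            continue
        r, p = pearsonr(bait_profile, np.asarray(profile, dtype=float))
        if r >= r_min and p <= p_max:
            hits[gene] = r
    return hits

def has_ccaat_box(promoter_seq):
    """Check a promoter sequence for the CCAAT motif bound by NF-Y complexes."""
    return "CCAAT" in promoter_seq.upper()
```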

Relevance:

10.00%

Publisher:

Abstract:

Nuclear Factor Y (NF-Y) is a heterotrimeric transcription factor comprising three subunits: NF-YA, NF-YB and NF-YC. Each of the three subunits in plants is encoded by multiple genes with differential expression profiles, implying functional specialisation of NF-Y subunit members in plants. In this study, we investigated the roles of NF-YB members in the light-mediated regulation of photosynthesis genes. Using quantitative RT-PCR, we identified two NF-YB members from Triticum aestivum (TaNF-YB3 & 7) that were markedly upregulated by light in the leaves and seedling shoots. A genome-wide coexpression analysis of multiple Affymetrix Wheat Genome Array datasets revealed that TaNF-YB3-coexpressed transcripts were highly enriched for the Gene Ontology term 'photosynthesis'. Transgenic wheat lines constitutively overexpressing TaNF-YB3 showed significant increases in leaf chlorophyll content, photosynthesis rate and early growth rate. Quantitative RT-PCR analysis showed that the expression levels of a number of TaNF-YB3-coexpressed transcripts were elevated in the transgenic wheat lines. The mRNA level of TaGluTR, which encodes glutamyl-tRNA reductase, the enzyme catalysing the rate-limiting step of the chlorophyll biosynthesis pathway, was significantly increased in the leaves of the transgenic wheat. Significant increases in expression in the transgenic plant leaves were also observed for four photosynthetic apparatus genes encoding chlorophyll a/b-binding proteins (Lhca4 and Lhcb4) and photosystem I reaction center subunits (subunit K and subunit N), as well as for a gene encoding a chloroplast ATP synthase subunit. These results indicate that TaNF-YB3 is involved in the positive regulation of a number of photosynthesis genes in wheat.

Relevance:

10.00%

Publisher:

Abstract:

Objective: to assess the accuracy of data linkage across the spectrum of emergency care in the absence of a unique patient identifier, and to use the linked data to examine service delivery outcomes in an emergency department setting. Design: automated data linkage and manual data linkage were compared to determine their relative accuracy. Data were extracted from three separate health information systems (ambulance, ED and hospital inpatients), then linked to provide information about the emergency journey of each patient. The linking was done manually, through physical review of records, and automatically, using a data linking tool (Health Data Integration) developed by the CSIRO. Match rate and quality of the linking were compared. Setting: 10,835 patient presentations to a large, regional teaching hospital ED over a two-month period (August-September 2007). Results: comparison of the manual and automated linkage outcomes for each pair of linked datasets demonstrated a sensitivity of between 95% and 99%, a specificity of between 75% and 99%, and a positive predictive value of between 88% and 95%. Conclusions: our results indicate that automated linking provides a sound basis for health service analysis, even in the absence of a unique patient identifier. The use of an automated linking tool yields accurate data suitable for planning and service delivery purposes and enables the data to be linked regularly to examine service delivery outcomes.
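A minimal sketch of how accuracy figures of this kind can be derived once automated links are compared against a manually verified gold standard. This is illustrative Python, not the CSIRO Health Data Integration tool, and the set-based input format is an assumption.

```python
def linkage_accuracy(manual_links, automated_links, all_candidate_pairs):
    """Compare automated record links against manually verified links.

    Each argument is a set of (record_a, record_b) pairs; `all_candidate_pairs`
    is the full set of record pairs considered (hypothetical setup).
    """
    tp = len(automated_links & manual_links)        # links found by both
    fp = len(automated_links - manual_links)        # spurious automated links
    fn = len(manual_links - automated_links)        # true links missed
    tn = len(all_candidate_pairs) - tp - fp - fn    # pairs correctly left unlinked

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)   # positive predictive value
    return sensitivity, specificity, ppv
```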

Relevance:

10.00%

Publisher:

Abstract:

As organizations reach higher levels of Business Process Management maturity, they tend to accumulate large collections of process models. These repositories may contain thousands of activities and be managed by different stakeholders with varying skills and responsibilities. However, while being of great value, such repositories incur high management costs. It thus becomes essential to keep track of the various model versions, as they may mutually overlap, supersede one another and evolve over time. We propose an innovative versioning model, and an associated storage structure, specifically designed to maximize sharing across process models and process model versions, reduce conflicts in concurrent edits and automatically handle controlled change propagation. The focal point of this technique is to version single process model fragments, rather than entire process models. Indeed, empirical evidence shows that real-life process model repositories contain numerous duplicate fragments. Experiments on two industrial datasets confirm the usefulness of our technique.
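The following toy sketch (Python; class and method names are hypothetical, and this is not the storage structure from the paper) illustrates the underlying idea: fragments are stored once, content-addressed by a hash, and each model version merely references the fragments it uses, so duplicate fragments are shared across models and versions.

```python
import hashlib
import json

class FragmentStore:
    """Toy content-addressed store: identical process-model fragments are
    stored once and shared across models and versions (illustrative only)."""

    def __init__(self):
        self.fragments = {}   # digest -> fragment payload (JSON-serialisable)
        self.versions = {}    # (model_id, version) -> ordered fragment digests

    def _digest(self, fragment):
        canonical = json.dumps(fragment, sort_keys=True).encode()
        return hashlib.sha256(canonical).hexdigest()

    def commit(self, model_id, version, fragments):
        """Record a new model version as a list of fragment references."""
        digests = []
        for fragment in fragments:
            d = self._digest(fragment)
            self.fragments.setdefault(d, fragment)   # duplicates stored once
            digests.append(d)
        self.versions[(model_id, version)] = digests

    def checkout(self, model_id, version):
        """Reassemble a model version from its shared fragments."""
        return [self.fragments[d] for d in self.versions[(model_id, version)]]
```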

Relevance:

10.00%

Publisher:

Abstract:

Copyright protects much of the creative, cultural, educational, scientific and informational material generated by federal, State/Territory and local governments and their constituent departments and agencies. Governments at all levels develop, manage and distribute a vast array of materials in the form of documents, reports, websites, datasets and databases on CD or DVD and files that can be downloaded from a website. Under the Copyright Act 1968 (Cth), with few exceptions government copyright is treated the same as copyright owned by non-government parties insofar as the range of protected materials and the exclusive proprietary rights attaching to them are concerned. However, the rationale for recognizing copyright in public sector materials and vesting ownership of copyright in governments is fundamentally different to the main rationales underpinning copyright generally. The central justification for recognizing Crown copyright is to ensure that government documents and materials created for public administrative purposes are disseminated in an accurate and reliable form. Consequently, the exclusive rights held by governments as copyright owners must be exercised in a manner consistent with the rationale for conferring copyright ownership on them. Since Crown copyright exists primarily to ensure that documents and materials produced for use in the conduct of government are circulated in an accurate and reliable form, governments should exercise their exclusive rights to ensure that their copyright materials are made available for access and reuse, in accordance with any laws and policies relating to access to public sector materials. While copyright law vests copyright owners with extensive bundles of exclusive rights which can be exercised to prevent others making use of the copyright material, in the case of Crown copyright materials these rights should rarely be asserted by government to deviate from the general rule that Crown copyright materials will be available for “full and free reproduction” by the community at large.

Relevance:

10.00%

Publisher:

Abstract:

The traditional Vector Space Model (VSM) is not able to represent both the structure and the content of XML documents. This paper introduces a novel method of representing XML documents in a Tensor Space Model (TSM) and then utilizing it for clustering. Empirical analysis shows that the proposed method is scalable to large datasets; moreover, the factorized matrices produced by the proposed method help to improve the quality of clusters through the enriched representation of both the structure and the content of documents.
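As a rough illustration of the idea, the sketch below builds a document x structural-path x term count tensor and then factorizes a matricised unfolding of it with non-negative matrix factorization. The actual TSM construction and factorization used in the paper may differ; the input format, function names and use of scikit-learn's NMF are assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

def build_xml_tensor(docs, paths, terms, observations):
    """Build a (document x structural-path x term) count tensor from
    (doc, path, term) triples, one per term occurrence under an element
    path (hypothetical input format)."""
    d_ix = {d: i for i, d in enumerate(docs)}
    p_ix = {p: i for i, p in enumerate(paths)}
    t_ix = {t: i for i, t in enumerate(terms)}
    tensor = np.zeros((len(docs), len(paths), len(terms)))
    for doc, path, term in observations:
        tensor[d_ix[doc], p_ix[path], t_ix[term]] += 1
    return tensor

def document_factors(tensor, k=5):
    """Unfold the tensor along the document mode and factorize the unfolding
    with NMF; the resulting document factor matrix can be handed to any
    clustering algorithm (e.g. k-means)."""
    unfolded = tensor.reshape(tensor.shape[0], -1)   # mode-1 matricisation
    model = NMF(n_components=k, max_iter=500)
    return model.fit_transform(unfolded)             # one k-dim row per document
```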

Relevance:

10.00%

Publisher:

Abstract:

Information overload has become a serious issue for web users. Personalisation can provide effective solutions to overcome this problem, and recommender systems are one popular personalisation tool to help users deal with this issue. As the basis of personalisation, the accuracy and efficiency of web user profiling greatly affect the performance of recommender systems and other personalisation systems. In Web 2.0, emerging forms of user information provide new possible solutions for profiling users. Folksonomy, or tag information, is a typical kind of Web 2.0 information. Folksonomy implies users' topic interests and opinions, and has become another important source of user information for profiling users and making recommendations. However, since tags are arbitrary words given by users, folksonomy contains a lot of noise, such as tag synonyms, semantic ambiguities and personal tags. Such noise makes it difficult to profile users accurately or to make quality recommendations. This thesis investigates the distinctive features and multiple relationships of folksonomy and explores novel approaches to solve the tag quality problem and profile users accurately. Harvesting the wisdom of crowds and of experts, three new user profiling approaches are proposed: a folksonomy-based user profiling approach, a taxonomy-based user profiling approach, and a hybrid user profiling approach based on both folksonomy and taxonomy. The proposed user profiling approaches are applied to recommender systems to improve their performance. Based on the generated user profiles, user- and item-based collaborative filtering approaches, combined with content filtering methods, are proposed to make recommendations. The proposed user profiling and recommendation approaches have been evaluated through extensive experiments. The effectiveness evaluation experiments were conducted on two real-world datasets collected from the Amazon.com and CiteULike websites. The experimental results demonstrate that the proposed user profiling and recommendation approaches outperform related state-of-the-art approaches. In addition, this thesis proposes a parallel, scalable user profiling implementation based on advanced cloud computing techniques such as Hadoop, MapReduce and Cascading. The scalability evaluation experiments were conducted on a large-scale dataset collected from the Del.icio.us website. This thesis contributes to the effective use of the wisdom of crowds and experts to help users solve information overload issues, by providing more accurate, effective and efficient user profiling and recommendation approaches. It also contributes to the better usage of taxonomy information given by experts and folksonomy information contributed by users in Web 2.0.
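For illustration only, the sketch below shows one simple way a folksonomy-based user profile and a user-based collaborative filtering step could be combined. The data layout and function names are assumptions, and this is not the thesis' actual algorithms, which also handle tag noise and taxonomy information.

```python
import math
from collections import Counter, defaultdict

def folksonomy_profiles(taggings):
    """Build tag-frequency user profiles from (user, item, tag) triples.
    A bare-bones folksonomy profile; no noise handling or taxonomy."""
    profiles = defaultdict(Counter)
    for user, _item, tag in taggings:
        profiles[user][tag] += 1
    return profiles

def cosine(p, q):
    """Cosine similarity between two tag-frequency profiles."""
    dot = sum(w * q[t] for t, w in p.items() if t in q)
    norm = math.sqrt(sum(w * w for w in p.values())) * math.sqrt(sum(w * w for w in q.values()))
    return dot / norm if norm else 0.0

def recommend(target, profiles, user_items, top_n=10):
    """User-based collaborative filtering over tag profiles: score unseen
    items by the profile similarity of the users who collected them.
    `user_items` maps each user to the set of items they have collected."""
    scores = Counter()
    for other, profile in profiles.items():
        if other == target:
            continue
        sim = cosine(profiles[target], profile)
        for item in user_items[other] - user_items[target]:
            scores[item] += sim
    return [item for item, _ in scores.most_common(top_n)]
```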

Relevance:

10.00%

Publisher:

Abstract:

Social tags are an important information source in Web 2.0. They can be used to describe users’ topic preferences as well as the content of items in order to make personalized recommendations. However, since tags are arbitrary words given by users, they contain a lot of noise, such as tag synonyms, semantic ambiguities and personal tags. Such noise makes it difficult to improve the accuracy of item recommendations. To eliminate the noise of tags, in this paper we propose to use the multiple relationships among users, items and tags to find the semantic meaning of each tag for each user individually. With the proposed approach, the relevant tags of each item and the tag preferences of each user are determined. In addition, user- and item-based collaborative filtering combined with a content filtering approach are explored. The effectiveness of the proposed approaches is demonstrated in experiments conducted on real-world datasets collected from Amazon.com and the CiteULike website.
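A minimal sketch, under an assumed input of (user, item, tag) triples, of how the tripartite user-item-tag relationships can yield an item's relevant tags and a user's tag preferences, which can then be matched when scoring items. It is not the paper's exact method.

```python
from collections import Counter, defaultdict

def tag_relevance_and_preference(taggings):
    """From (user, item, tag) triples, estimate how relevant each tag is to
    each item (community usage) and how strongly each user prefers each tag."""
    item_tags = defaultdict(Counter)
    user_tags = defaultdict(Counter)
    for user, item, tag in taggings:
        item_tags[item][tag] += 1
        user_tags[user][tag] += 1
    relevance = {i: {t: c / sum(ct.values()) for t, c in ct.items()}
                 for i, ct in item_tags.items()}
    preference = {u: {t: c / sum(ct.values()) for t, c in ct.items()}
                  for u, ct in user_tags.items()}
    return relevance, preference

def score(user, item, relevance, preference):
    """Score an item for a user by matching the item's relevant tags against
    the user's tag preferences (a simple content-filtering style match)."""
    return sum(preference.get(user, {}).get(t, 0.0) * w
               for t, w in relevance.get(item, {}).items())
```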

Relevance:

10.00%

Publisher:

Abstract:

Item folksonomy, or tag information, is a typical and prevalent kind of Web 2.0 information. Item folksonomy contains rich user opinion information on item classifications and descriptions, and can be used as another important information source for opinion mining. On the other hand, each item is associated with taxonomy information that reflects the viewpoints of experts. In this paper, we propose to mine users’ opinions on items based on item taxonomy developed by experts and folksonomy contributed by users. In addition, we explore how to make personalized item recommendations based on users’ opinions. Experiments conducted on real-world datasets collected from Amazon.com and CiteULike demonstrate the effectiveness of the proposed approaches.
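Purely as an illustration of combining the two sources, the snippet below projects users' tagging activity onto an expert item taxonomy to obtain category-level interest profiles. The input format is an assumption, and the paper's actual opinion mining approach is more involved.

```python
from collections import Counter, defaultdict

def category_preferences(taggings, item_taxonomy):
    """Project users' tagging activity onto an expert taxonomy: every time a
    user tags an item, credit the item's taxonomy categories, yielding a
    category-level view of that user's interests.

    `item_taxonomy` maps each item to the list of expert-assigned categories
    it belongs to (hypothetical input format).
    """
    prefs = defaultdict(Counter)
    for user, item, _tag in taggings:
        for category in item_taxonomy.get(item, []):
            prefs[user][category] += 1
    return prefs
```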

Relevance:

10.00%

Publisher:

Abstract:

In 2005, Stephen Abram, vice president of Innovation at SirsiDynix, challenged library and information science (LIS) professionals to start becoming “librarian 2.0.” In the last few years, discussion and debate about the “core competencies” needed by librarian 2.0 have appeared in the “biblioblogosphere” (blogs written by LIS professionals). However, beyond these informal blog discussions few systematic and empirically based studies have taken place. A project funded by the Australian Learning and Teaching Council fills this gap. The project identifies the key skills, knowledge, and attributes required by “librarian 2.0.” Eighty-one members of the Australian LIS profession participated in a series of focus groups. Eight themes emerged as being critical to “librarian 2.0”: technology, communication, teamwork, user focus, business savvy, evidence based practice, learning and education, and personal traits. Guided by these findings interviews with 36 LIS educators explored the current approaches used within contemporary LIS education to prepare graduates to become “librarian 2.0”. This video presents an example of ‘great practice’ in current LIS educative practice, helping to foster web 2.0 professionals.

Relevance:

10.00%

Publisher:

Abstract:

In 2005, Stephen Abram, vice president of Innovation at SirsiDynix, challenged library and information science (LIS) professionals to start becoming “librarian 2.0.” In the last few years, discussion and debate about the “core competencies” needed by librarian 2.0 have appeared in the “biblioblogosphere” (blogs written by LIS professionals). However, beyond these informal blog discussions few systematic and empirically based studies have taken place. A project funded by the Australian Learning and Teaching Council fills this gap. The project identifies the key skills, knowledge, and attributes required by “librarian 2.0.” Eighty-one members of the Australian LIS profession participated in a series of focus groups. Eight themes emerged as being critical to “librarian 2.0”: technology, communication, teamwork, user focus, business savvy, evidence based practice, learning and education, and personal traits. Guided by these findings interviews with 36 LIS educators explored the current approaches used within contemporary LIS education to prepare graduates to become “librarian 2.0”. This video presents an example of ‘great practice’ in current LIS education as it strives to foster web 2.0 professionals.

Relevance:

10.00%

Publisher:

Abstract:

In 2005, Stephen Abram, vice president of Innovation at SirsiDynix, challenged library and information science (LIS) professionals to start becoming “librarian 2.0.” In the last few years, discussion and debate about the “core competencies” needed by librarian 2.0 have appeared in the “biblioblogosphere” (blogs written by LIS professionals). However, beyond these informal blog discussions few systematic and empirically based studies have taken place. A project funded by the Australian Learning and Teaching Council fills this gap. The project identifies the key skills, knowledge, and attributes required by “librarian 2.0.” Eighty-one members of the Australian LIS profession participated in a series of focus groups. Eight themes emerged as being critical to “librarian 2.0”: technology, communication, teamwork, user focus, business savvy, evidence based practice, learning and education, and personal traits. Guided by these findings interviews with 36 LIS educators explored the current approaches used within contemporary LIS education to prepare graduates to become “librarian 2.0”. This video presents an example of ‘great practice’ in current LIS education as it strives to foster web 2.0 professionals.

Relevance:

10.00%

Publisher:

Abstract:

Distributed Denial-of-Service (DDoS) attacks continue to be one of the most pernicious threats to the delivery of services over the Internet. Not only are DDoS attacks present in many guises, but they are also continuously evolving as new vulnerabilities are exploited. Hence, accurate detection of these attacks remains a challenging problem and a necessity for ensuring high-end network security. An intrinsic challenge in addressing this problem is to effectively distinguish these Denial-of-Service attacks from similar-looking Flash Events (FEs) created by legitimate clients. A considerable overlap between the general characteristics of FEs and DDoS attacks makes it difficult to precisely separate these two classes of Internet activity. In this paper we propose parameters which can be used to explicitly distinguish FEs from DDoS attacks, and we analyse two real-world, publicly available datasets to validate our proposal. Our analysis shows that even though FEs appear very similar to DDoS attacks, there are several subtle dissimilarities which can be exploited to separate these two classes of events.
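The specific parameters are defined in the paper; as a hedged illustration of the general idea, the sketch below computes two commonly cited traffic features over a detection window: the fraction of previously unseen source IPs and the entropy of the per-source request distribution. The function name, feature choices and input format are assumptions, not the paper's parameters.

```python
import math
from collections import Counter

def traffic_features(window_requests, previously_seen_sources):
    """Summarise one detection window of traffic.

    `window_requests` is a list of source IPs, one entry per request in the
    window; `previously_seen_sources` is the set of sources observed before
    the event started (hypothetical input format).
    """
    if not window_requests:
        raise ValueError("empty detection window")
    counts = Counter(window_requests)
    total = len(window_requests)
    sources = set(counts)

    # Share of traffic from sources never seen before the event: flash crowds
    # tend to be driven by genuine clients, many already known to the server,
    # whereas bot-driven attacks often introduce a burst of brand-new sources.
    new_source_ratio = len(sources - previously_seen_sources) / len(sources)

    # Entropy of the per-source request distribution: human users behave
    # heterogeneously, while bots often issue near-uniform request volumes.
    source_entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())

    return {"requests": total,
            "distinct_sources": len(sources),
            "new_source_ratio": new_source_ratio,
            "source_entropy": source_entropy}
```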