953 results for Annotation Tag


Relevance:

10.00%

Publisher:

Abstract:

Dwell time at the busway station has a significant effect on bus capacity and delay. Dwell time has conventionally been estimated using models developed on the basis of field survey data. However, field surveys are resource- and cost-intensive, so dwell time estimates based on limited observations can be inaccurate. Most public transport systems are now equipped with Automatic Passenger Count (APC) and/or Automatic Fare Collection (AFC) systems. AFC in particular reduces on-board ticketing time and the driver's workload, and ultimately reduces bus dwell time. AFC systems can record all passenger transactions, providing transit agencies with access to vast quantities of data. AFC data provides transaction timestamps; however, these differ from dwell times because passengers may tag on or tag off at times other than when the doors open and close. This research contended that models could be developed to reliably estimate dwell time distributions when measured distributions of transaction times are known. Development of the models required calibration and validation using field survey data of actual dwell times, and an appreciation of another component of transaction time: bus time in queue. This research develops models for a peak period and an off-peak period at a busway station on the South East Busway (SEB) in Brisbane, Australia.
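
As a rough illustration of the calibration idea described above, the sketch below fits a simple linear relationship between AFC transaction-time spans and surveyed dwell times. The linear model form, the variable names and the toy numbers are assumptions for illustration only; they are not the paper's models or data.

```python
# Illustrative sketch only: a simple linear calibration between AFC
# transaction-time spans and field-surveyed dwell times. The model form
# and all numbers are assumptions, not the study's actual models or data.
import numpy as np

def transaction_span(tag_times):
    """Span (seconds) between the first and last AFC transaction at a stop event."""
    return max(tag_times) - min(tag_times)

def calibrate(spans, surveyed_dwells):
    """Fit dwell ~= a * span + b by ordinary least squares."""
    a, b = np.polyfit(spans, surveyed_dwells, deg=1)
    return a, b

def estimate_dwell(span, a, b):
    return a * span + b

# Toy calibration data (seconds) from a hypothetical field survey.
spans = np.array([18.0, 25.0, 40.0, 12.0, 33.0])
dwells = np.array([26.0, 34.0, 52.0, 20.0, 44.0])
a, b = calibrate(spans, dwells)
print(f"estimated dwell for a 30 s transaction span: {estimate_dwell(30.0, a, b):.1f} s")
```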

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we describe a method to represent and discover adversarial group behavior in a continuous domain. In comparison to other types of behavior, adversarial behavior is heavily structured, as the location of a player (or agent) depends on both their teammates and their adversaries, in addition to the tactics or strategies of the team. We present a method which can exploit this relationship through the use of a spatiotemporal basis model. As players constantly change roles during a match, we show that employing a "role-based" representation instead of one based on player "identity" can best exploit the playing structure. As vision-based systems currently do not provide perfect detection/tracking (e.g. missed or false detections), we show that our compact representation can effectively "denoise" erroneous detections as well as enable temporal analysis, which was previously prohibitive due to the dimensionality of the signal. To evaluate our approach, we used a fully instrumented field-hockey pitch with 8 fixed high-definition (HD) cameras, applied our method to approximately 200,000 frames of data from a state-of-the-art real-time player detector, and compared the results to manually labelled data.
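
The spatiotemporal basis model itself is not reproduced here, but the "role-based" idea can be illustrated with a minimal sketch: in each frame, detections are assigned to a team's nominal role positions by minimising total squared distance (the Hungarian algorithm). The role template, cost function and coordinates are assumptions for illustration.

```python
# Minimal sketch of a role-based representation: detections in one frame are
# assigned to nominal role positions by minimising total squared distance.
# The role template and coordinates are invented; this is not the paper's
# spatiotemporal basis model.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_roles(detections, role_positions):
    """detections, role_positions: (N, 2) arrays of pitch coordinates."""
    diff = detections[:, None, :] - role_positions[None, :, :]
    cost = np.sum(diff ** 2, axis=-1)        # cost[i, j] = ||det_i - role_j||^2
    det_idx, role_idx = linear_sum_assignment(cost)
    return dict(zip(role_idx, det_idx))      # role index -> detection index

# Toy frame: four detections matched against a four-role formation template.
detections = np.array([[10.0, 5.0], [40.0, 20.0], [60.0, 8.0], [30.0, 30.0]])
roles = np.array([[12.0, 6.0], [38.0, 22.0], [58.0, 10.0], [28.0, 28.0]])
print(assign_roles(detections, roles))
```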

Relevance:

10.00%

Publisher:

Abstract:

In most intent recognition studies, annotations of query intent are created post hoc by external assessors who are not the searchers themselves. It is important for the field to get a better understanding of the quality of this process as an approximation for determining the searcher's actual intent. Some studies have investigated the reliability of the query intent annotation process by measuring the interassessor agreement. However, these studies did not measure the validity of the judgments, that is, to what extent the annotations match the searcher's actual intent. In this study, we asked both the searchers themselves and external assessors to classify queries using the same intent classification scheme. We show that of the seven dimensions in our intent classification scheme, four can reliably be used for query annotation. Of these four, only the annotations on the topic and spatial sensitivity dimensions are valid when compared with the searcher's annotations. The difference between the interassessor agreement and the assessor-searcher agreement was significant on all dimensions, showing that the agreement between external assessors is not a good estimator of the validity of the intent classifications. Therefore, we encourage the research community to consider using query intent classifications by the searchers themselves as test data.
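
The contrast between inter-assessor agreement and assessor-searcher agreement can be illustrated with one common agreement measure, Cohen's kappa; the labels below are invented, and the study's own data, dimensions and agreement statistics are not reproduced.

```python
# Illustrative only: comparing inter-assessor agreement with assessor-searcher
# agreement on a single intent dimension using Cohen's kappa. The label
# sequences are made up; they are not the study's data.
from sklearn.metrics import cohen_kappa_score

searcher   = ["topic", "topic",   "spatial", "topic", "spatial", "topic"]
assessor_1 = ["topic", "spatial", "spatial", "topic", "spatial", "topic"]
assessor_2 = ["topic", "spatial", "topic",   "topic", "spatial", "spatial"]

print(f"inter-assessor kappa:    {cohen_kappa_score(assessor_1, assessor_2):.2f}")
print(f"assessor-searcher kappa: {cohen_kappa_score(assessor_1, searcher):.2f}")
```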

Relevance:

10.00%

Publisher:

Abstract:

Introduction: The delivery of health care in the 21st century will look like no other in the past. Fast-paced technological advances will need to transition from the information age into clinical practice. e-Health is the over-arching phenomenon of information technology in health, and telehealth is one arm of that phenomenon. The uptake of telehealth, both in Australia and overseas, has changed the face of health service delivery to many rural and remote communities for the better, removing what is known as the tyranny of distance. Many studies have evaluated satisfaction with telehealth and its cost-benefit across organisational aspects, as well as various adaptations of clinical pathways, and this remains the predominant focus of studies published to date. However, whilst many researchers have commented on the need to improve and attend to the communication and relationship-building aspects of telehealth, no studies have examined this further. The aim of this study was to identify patient and clinician experiences, concerns, behaviours and perceptions of the telehealth interaction, and to develop a training tool to help clinicians improve their interaction skills. Methods: A mixed-methods design combining quantitative (survey analysis and data coding) and qualitative (interview analysis) approaches was adopted. The study comprised four phases: it first qualitatively explored the needs of clients (patients) and clinicians within a telehealth consultation, and then designed, developed, piloted and evaluated (quantitatively and qualitatively) a telehealth communication training program. Qualitative data were collected and analysed during Phase 1 to describe and define the missing 'communication and rapport building' aspects of telehealth. These data were then used in Phase 2 to develop a self-paced, interactive communication training program that built on clinicians' existing skills. Phase 3 evaluated the training program with 26 clinicians, with results recorded pre- and post-training, whilst Phase 4 piloted the program, to inform future recommendations, with a patient group at two rural hospitals within a Queensland Health setting. Results: Comparisons of pre- and post-training data on 1) effective communication styles, 2) involvement in the communication training package, 3) satisfaction, and 4) health outcomes indicated differences in effective communication style and increased satisfaction, but no difference in health outcomes for this patient group. The post-training results revealed that over half of the participants (N = 17, 65%) were more responsive to non-verbal cues and were better able to reflect and respond to looks of anxiousness and confusion from a 'patient' within a telehealth consultation. Post-training evaluations also showed that clinicians had enhanced their therapeutic communication, paying greater attention to their own body posture, eye contact and presentation. Clinicians spent more time looking at the 'patient', with intervals of direct eye contact increasing by 35 seconds, and less time looking down at paperwork, which decreased by 20 seconds.
Overall, 73% of the clinicians were satisfied with the training program, and 61% strongly agreed that they recognised areas of their communication that needed improving during a telehealth consultation. For the patient group there was a significant post-training difference in rapport, with the mean score increasing from 42 (SD = 28, n = 27) to 48 (SD = 5.9, n = 24). For the patient group's communication comfort there was a significant difference between pre- and post-training scores, t(10) = 27.9, p = .002, meaning that overall the patients felt less inhibited whilst talking to the clinicians and felt more understood. Conclusion: The aim of this study was to explore the characteristics of good patient-clinician communication and unmet training needs for telehealth consultations. The study developed a training program that was specific to telehealth consultations and not dependent on a 'trainer' to deliver the content. In light of the existing literature this is a first of its kind and a valuable contribution to research on this topic. The training program was effective in improving clinicians' communication styles and increased patients' satisfaction within an e-health environment. This study also challenges the historical myth that telehealth cannot be part of empathic, patient-centred care because of its technology tag.
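
As a purely hypothetical illustration of the pre/post comparison reported above, a paired t-test on before-and-after scores might be computed as follows; the numbers are invented and are not the study's data.

```python
# Hedged illustration only: a paired t-test of pre- vs post-training scores,
# the kind of comparison reported in the abstract. All values are invented.
from scipy import stats

pre  = [3.1, 2.8, 3.4, 2.9, 3.0, 3.3, 2.7, 3.2, 3.1, 2.9, 3.0]
post = [4.0, 3.9, 4.2, 3.8, 4.1, 4.3, 3.7, 4.0, 4.2, 3.9, 4.1]

t_stat, p_value = stats.ttest_rel(pre, post)
print(f"t({len(pre) - 1}) = {t_stat:.2f}, p = {p_value:.3f}")
```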

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a model for the generation of a MAC tag using a stream cipher. The input message is used indirectly to control segments of the keystream that form the MAC tag. Several recent proposals can be considered as instances of this general model, as they all perform message accumulation in this way. However, they use slightly different processes in the message preparation and finalisation phases. We examine the security of this model for different options and against different types of attack, and conclude that the indirect injection model can be used to generate MAC tags securely for certain combinations of options. Careful consideration is required at the design stage to avoid combinations of options that result in susceptibility to forgery attacks. Additionally, some implementations may be vulnerable to side-channel attacks if used in Authenticated Encryption (AE) algorithms. We give design recommendations to provide resistance to these attacks for proposals following this model.
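
The general "indirect injection" idea, in which message bits select which keystream segments are folded into the tag rather than entering it directly, can be shown with a toy sketch. This is not any of the analysed proposals and is not a secure MAC; the keystream generator is a SHA-256 stand-in used purely so the example runs.

```python
# Toy illustration of indirect injection: message bits control which keystream
# words are accumulated into the tag. NOT a secure MAC and NOT one of the
# analysed proposals; the "keystream" is a SHA-256 counter-mode stand-in.
import hashlib

def keystream_words(key, nonce, n_words, word_bytes=4):
    out = []
    for ctr in range(n_words):
        block = hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        out.append(int.from_bytes(block[:word_bytes], "big"))
    return out

def toy_mac(key, nonce, message, tag_bits=32):
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    ks = keystream_words(key, nonce, len(bits) + 1)
    acc = ks[0]                       # preparation/finalisation phases simplified away
    for i, bit in enumerate(bits):
        if bit:                       # the message bit only *selects* whether this
            acc ^= ks[i + 1]          # keystream word is folded into the accumulator
    return acc & ((1 << tag_bits) - 1)

print(hex(toy_mac(b"sixteen byte key", b"nonce-01", b"attack at dawn")))
```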

Relevance:

10.00%

Publisher:

Abstract:

In many applications where encrypted traffic flows from an open (public) domain to a protected (private) domain, there exists a gateway that bridges the two domains and faithfully forwards the incoming traffic to the receiver. We observe that indistinguishability against (adaptive) chosen-ciphertext attacks (IND-CCA), which is a mandatory goal in the face of active attacks in a public domain, can be essentially relaxed to indistinguishability against chosen-plaintext attacks (IND-CPA) for ciphertexts once they pass the gateway that acts as an IND-CCA/CPA filter by first checking the validity of an incoming IND-CCA ciphertext, then transforming it (if valid) into an IND-CPA ciphertext, and forwarding the latter to the recipient in the private domain. “Non-trivial filtering” can result in reduced decryption costs on the receivers' side. We identify a class of encryption schemes with publicly verifiable ciphertexts that admit generic constructions of (non-trivial) IND-CCA/CPA filters. These schemes are characterized by the existence of public algorithms that can distinguish between valid and invalid ciphertexts. To this end, we formally define (non-trivial) public verifiability of ciphertexts for general encryption schemes, key encapsulation mechanisms, and hybrid encryption schemes, encompassing public-key, identity-based, and tag-based encryption flavours. We further analyze the security impact of public verifiability and discuss generic transformations and concrete constructions that enjoy this property.
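
The filtering idea can be sketched schematically: the gateway runs the public validity check on an incoming IND-CCA ciphertext, strips the publicly verifiable component, and forwards only the lighter IND-CPA part. The class and function names below are placeholders, not an API or construction from the paper.

```python
# Schematic sketch of an IND-CCA/CPA filter at a gateway, assuming a scheme
# with publicly verifiable ciphertexts. Names and the validity predicate are
# placeholders, not the paper's constructions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CCACiphertext:
    payload: bytes   # the IND-CPA "core" the receiver will decrypt
    proof: bytes     # publicly checkable validity component

def publicly_valid(ct: CCACiphertext) -> bool:
    """Stand-in for the scheme's public ciphertext-validity check."""
    return len(ct.proof) > 0                     # placeholder predicate

def gateway_filter(ct: CCACiphertext) -> Optional[bytes]:
    """Check validity, strip the proof, forward the lighter CPA ciphertext."""
    if not publicly_valid(ct):
        return None                              # drop invalid traffic at the gateway
    return ct.payload                            # receiver only does CPA decryption

print(gateway_filter(CCACiphertext(payload=b"cpa-core", proof=b"pi")))
```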

Relevance:

10.00%

Publisher:

Abstract:

It’s hard not to be somewhat cynical about the self-congratulatory ‘diversity’ at the centre of the growing calendar of art bi/tri-ennials. The -ennial has proven expedient to the global tourism circuit, keeping regional economies and a relatively moderate pool of transnational artists afloat, and the Asia Pacific Triennial is no exception. The mediation of representation that is imperative to the ‘best of’ formats of these transnational art shows hinges on a categorical backwardness that can feel more like a Miss World competition than a progressive art show, because the little tag in parentheses after each artist’s name seems just as politically precarious now as it did forty years ago. Despite a weighty corpus of practical and critical work to the contrary, identity politics are so intrinsic to art capitalization, for both artists and institutions, that extricating ourselves from the particular and strategic politics of identification is seemingly impossible. Not that everyone wants to, of course.

Relevance:

10.00%

Publisher:

Abstract:

We hypothesized that normal human mesothelial cells acquire resistance to asbestos-induced toxicity via induction of one or more epidermal growth factor receptor (EGFR)-linked survival pathways (phosphoinositol-3-kinase/AKT/mammalian target of rapamycin and extracellular signal-regulated kinase [ERK] 1/2) during simian virus 40 (SV40) transformation and carcinogenesis. Both isolated HKNM-2 mesothelial cells and a telomerase-immortalized mesothelial line (LP9/TERT-1) were more sensitive to crocidolite asbestos toxicity than an SV40 Tag-immortalized mesothelial line (MET5A) and malignant mesothelioma cell lines (HMESO and PPM Mill). Whereas increases in phosphorylation of AKT (pAKT) were observed in MET5A cells in response to asbestos, LP9/TERT-1 cells exhibited dose-related decreases in pAKT levels. Pretreatment with an EGFR phosphorylation or mitogen-activated protein kinase kinase 1/2 inhibitor abrogated asbestos-induced phosphorylated ERK (pERK) 1/2 levels in both LP9/TERT-1 and MET5A cells, as well as increases in pAKT levels in MET5A cells. Transient transfection of small interfering RNAs targeting ERK1, ERK2, or AKT revealed that ERK1/2 pathways were involved in cell death by asbestos in both cell lines. Asbestos-resistant HMESO or PPM Mill cells with high endogenous levels of ERKs or AKT did not show dose-responsive increases in pERK1/ERK1, pERK2/ERK2, or pAKT/AKT levels after asbestos exposure. However, small hairpin ERK2 stable cell lines created from both malignant mesothelioma lines were more sensitive to asbestos toxicity than shERK1 and shControl lines, and exhibited unique, tumor-specific changes in endogenous cell death-related gene expression. Our results suggest that EGFR phosphorylation is causally linked to pERK and pAKT activation by asbestos in normal and SV40 Tag-immortalized human mesothelial cells. They also indicate that ERK2 plays a role in modulating asbestos toxicity by regulating genes critical to cell injury and survival that are differentially expressed in human mesotheliomas.

Relevance:

10.00%

Publisher:

Abstract:

Acoustic sensors are increasingly used to monitor biodiversity. They can remain deployed for extended periods to passively and objectively record the sounds of the environment. The collected acoustic data must be analyzed to identify the sounds made by fauna in order to understand biodiversity. Citizen scientists play an important role in analyzing these data by annotating calls and identifying species. This paper presents our research into bioacoustic annotation techniques. It describes our work in defining a process for managing, creating, and using the tags that are applied to our annotations. The paper includes a detailed description of our methodology for correcting our folksonomic tags and then linking them to taxonomic data sources. Providing tools and processes for maintaining species-naming consistency is critical to the success of a project designed to generate scientific data. We demonstrate that cleaning the folksonomic data and providing links to external taxonomic authorities enhances the scientific utility of the tagging efforts of citizen scientists.
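
The cleaning-and-linking step can be illustrated with a small sketch that maps a free-form (folksonomic) tag to the closest canonical species name and a taxonomic identifier. The species list, identifiers and matching threshold below are invented and are not the project's actual pipeline.

```python
# Illustrative sketch: correct a noisy folksonomic species tag and link it to
# a taxonomic authority record. The authority table and IDs are invented.
from difflib import get_close_matches

taxonomy = {
    "Litoria fallax": "urn:lsid:example.org:taxon:12345",
    "Ninox boobook": "urn:lsid:example.org:taxon:67890",
}

def link_tag(raw_tag, authority=taxonomy):
    """Map a noisy tag to the closest canonical name and its identifier."""
    match = get_close_matches(raw_tag.strip().capitalize(),
                              list(authority), n=1, cutoff=0.6)
    if not match:
        return None                       # leave unknown tags for manual review
    canonical = match[0]
    return canonical, authority[canonical]

print(link_tag("litoria falax"))          # misspelled citizen-science tag
print(link_tag("boobook owl"))            # no confident match -> None
```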

Relevance:

10.00%

Publisher:

Abstract:

Post-transcriptional silencing of plant genes using anti-sense or co-suppression constructs usually results in only a modest proportion of silenced individuals. Recent work has demonstrated the potential for constructs encoding self-complementary 'hairpin' RNA (hpRNA) to efficiently silence genes. In this study we examine design rules for efficient gene silencing, in terms of both the proportion of independent transgenic plants showing silencing, and the degree of silencing. Using hpRNA constructs containing sense/anti-sense arms ranging from 98 to 853 nt gave efficient silencing in a wide range of plant species, and inclusion of an intron in these constructs had a consistently enhancing effect. Intron-containing constructs (ihpRNA) generally gave 90-100% of independent transgenic plants showing silencing. The degree of silencing with these constructs was much greater than that obtained using either co-suppression or anti-sense constructs. We have made a generic vector, pHANNIBAL, that allows a simple, single PCR product from a gene of interest to be easily converted into a highly effective ihpRNA silencing construct. We have also created a high-throughput vector, pHELLSGATE, that should facilitate the cloning of gene libraries or large numbers of defined genes, such as those in EST collections, using an in vitro recombinase system. This system may facilitate the large-scale determination and discovery of plant gene functions in the same way as RNAi is being used to examine gene function in Caenorhabditis elegans.

Relevance:

10.00%

Publisher:

Abstract:

Clustering identities in a broadcast video is a useful task to aid in video annotation and retrieval. Quality-based frame selection is a crucial task in video face clustering, both to improve clustering performance and to reduce computational cost. We present a framework that selects the highest-quality frames available in a video for face clustering. This frame selection technique uses low-level and high-level features (face symmetry, sharpness, contrast and brightness) to select the highest-quality facial images available in a face sequence. We also consider the temporal distribution of the faces to ensure that the selected faces are taken at times distributed throughout the sequence. Normalized feature scores are fused, and frames with high quality scores are used in a Local Gabor Binary Pattern Histogram Sequence based face clustering system. We present a news video database to evaluate the clustering system's performance. Experiments on the newly created news database show that the proposed method selects the best-quality face images in the video sequence, resulting in improved clustering performance.
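
The quality-based selection step can be sketched under simple assumptions: the four features are approximated with crude grayscale measures, min-max normalised, fused with equal weights, and the top-scoring frames are returned. This is an illustration, not the authors' implementation.

```python
# Sketch of quality-based frame selection: crude stand-ins for sharpness,
# contrast, brightness and symmetry are min-max normalised and fused with
# equal weights. Not the paper's exact features or weighting.
import numpy as np

def sharpness(img):                     # variance of a simple gradient magnitude
    gy, gx = np.gradient(img.astype(float))
    return float(np.var(np.hypot(gx, gy)))

def contrast(img):   return float(np.std(img))
def brightness(img): return float(np.mean(img))
def symmetry(img):                      # left/right similarity of the face crop
    return -float(np.mean(np.abs(img.astype(float) - np.fliplr(img))))

def select_frames(face_crops, k=3):
    feats = np.array([[sharpness(f), contrast(f), brightness(f), symmetry(f)]
                      for f in face_crops])
    lo, hi = feats.min(axis=0), feats.max(axis=0)
    scores = ((feats - lo) / np.maximum(hi - lo, 1e-9)).mean(axis=1)
    return np.argsort(scores)[::-1][:k]  # indices of the k highest-quality frames

crops = [np.random.randint(0, 256, (64, 64)) for _ in range(10)]  # stand-in crops
print(select_frames(crops))
```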

Relevance:

10.00%

Publisher:

Abstract:

Since its launch in 2006, Twitter has turned from a niche service to a mass phenomenon. By the beginning of 2013, the platform claims to have more than 200 million active users, who “post over 400 million tweets per day” (Twitter, 2013). Its success is spreading globally; Twitter is now available in 33 different languages, and has significantly increased its support for languages that use non-Latin character sets. While Twitter, Inc. has occasionally changed the appearance of the service and added new features—often in reaction to users’ developing their own conventions, such as adding ‘#’ in front of important keywords to tag them—the basic idea behind the service has stayed the same: users may post short messages (tweets) of up to 140 characters and follow the updates posted by other users. This leads to the formation of complex follower networks with unidirectional as well as bidirectional connections between individuals, but also between media outlets, NGOs, and other organisations. While originally ‘microblogs’ were perceived as a new genre of online communication, of which Twitter was just one exemplar, the platform has become synonymous with microblogging in most countries. A notable exception is Sina Weibo, popular in China where Twitter is not available. Other similar platforms have been shut down (e.g., Jaiku), or are being used in slightly different ways (e.g., Tumblr), thus making Twitter a unique service within the social media landscape.

Relevance:

10.00%

Publisher:

Abstract:

Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information: be that information for a specific study, tweets that can inform emergency services or other responders during an ongoing crisis, or posts that give an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the data sets collected and analyzed are preformed, that is, they are built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders.

The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as authoritative sources. Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection.

The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions which has the potential to impact on future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of the information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate.

The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art. In this paper we describe strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement’s presence on Twitter changes over time. We also discuss opportunities and methods for extracting smaller slices of data from a social media archive to support a multitude of research projects in multiple fields of study.

The common theme amongst these papers is that of constructing a data set, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and subsequent discussion, the panel will inform the wider research community not only on the objectives and limitations of data collection, live analytics, and filtering, but also on current and in-development methodologies that could be adopted by those working with such datasets, and on how such approaches could be customized depending on the project stakeholders.
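
As a hedged illustration of the content-analysis scoring described for the first paper, the sketch below assigns each incoming tweet a combined relevance and urgency score and forwards only high scorers for manual review. The keyword lists, weights and threshold are invented.

```python
# Hedged sketch of content-analysis triage for crisis tweets: score relevance
# from topic keywords and urgency from urgency cues, then keep high scorers.
# Keyword lists, weights and the threshold are invented for illustration.
TOPIC_KEYWORDS = {"flood": 2, "#qldfloods": 3, "evacuate": 2, "bridge": 1}
URGENCY_CUES = {"trapped": 3, "help": 2, "urgent": 2, "now": 1}

def score_tweet(text):
    words = text.lower().split()
    relevance = sum(TOPIC_KEYWORDS.get(w, 0) for w in words)
    urgency = sum(URGENCY_CUES.get(w, 0) for w in words)
    return relevance + urgency

def triage(tweets, threshold=4):
    """Return tweets whose combined score suggests review by responders."""
    return [t for t in tweets if score_tweet(t) >= threshold]

stream = [
    "Family trapped on roof near the bridge, please help now #qldfloods",
    "Lovely sunset over the river tonight",
]
print(triage(stream))
```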

Relevance:

10.00%

Publisher:

Abstract:

A new approach for recognizing the iris of the human eye is presented. Zero-crossings of the wavelet transform at various resolution levels are calculated over concentric circles on the iris, and the resulting one-dimensional (1-D) signals are compared with model features using different dissimilarity functions.
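
Under simple assumptions, the representation can be sketched as follows: intensities are sampled on a concentric circle, the resulting 1-D signal is band-pass filtered at one dyadic scale (a difference of Gaussians stands in for the wavelet), zero-crossing positions are recorded, and two signatures are compared with a basic set dissimilarity. The filter and the dissimilarity function are placeholders, not the paper's exact choices.

```python
# Illustrative sketch: zero-crossings of a band-pass-filtered 1-D signal
# sampled on a concentric iris circle, compared with a simple set
# dissimilarity. The DoG filter stands in for the wavelet; not the paper's
# exact transform or matching scheme.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def circle_signal(image, cx, cy, r, n=256):
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip((cx + r * np.cos(theta)).astype(int), 0, image.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(theta)).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs].astype(float)

def zero_crossings(signal, scale):
    band = gaussian_filter1d(signal, scale) - gaussian_filter1d(signal, 2 * scale)
    return set(np.flatnonzero(np.diff(np.sign(band)) != 0).tolist())

def dissimilarity(zc_a, zc_b):
    return 1.0 - len(zc_a & zc_b) / max(len(zc_a | zc_b), 1)

iris = np.random.rand(240, 320)          # stand-in for a segmented eye image
a = zero_crossings(circle_signal(iris, 160, 120, 40), scale=2)
b = zero_crossings(circle_signal(iris, 160, 120, 42), scale=2)
print(f"dissimilarity between the two circle signatures: {dissimilarity(a, b):.2f}")
```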

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a study of the effectiveness of using local acceleration measurements versus remote angle measurements to provide stabilising control via SVCs following large disturbances. The system studied was an analogue of the Queensland-New South Wales Interconnection (QNI) and involved the control of an existing Static Var Compensator (SVC) at Sydney West. The study is placed in the context of wide-area controls for large systems using aggregated models for groups of machines.
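
As a toy analogue only (a single-machine swing equation rather than the QNI system model), the sketch below integrates a locally measured acceleration signal into a speed-deviation estimate that drives a supplementary SVC damping term. All parameters, gains and the control structure are assumptions for illustration, not the Sydney West SVC design.

```python
# Toy single-machine analogue: locally measured acceleration is integrated to
# a speed-deviation estimate that modulates an SVC damping term. Parameters
# and control structure are illustrative assumptions, not the QNI study.
H, D, K_sync = 4.0, 0.2, 1.2     # inertia, natural damping, synchronising coeff. (p.u.)
K_damp = 1.5                     # gain on the supplementary SVC damping signal
dt, steps = 0.01, 4000           # 40 s of simulated response

delta, omega, speed_est = 0.3, 0.0, 0.0   # post-disturbance angle (rad), speed deviation
peak_late = 0.0
for k in range(steps):
    accel = (-D * omega - K_sync * delta - K_damp * speed_est) / (2 * H)
    speed_est += accel * dt      # controller integrates the local acceleration
    omega += accel * dt
    delta += omega * dt
    if k > steps // 2:           # track the residual swing in the second half
        peak_late = max(peak_late, abs(delta))

print(f"late-time angle swing: {peak_late:.4f} rad")
```

Setting K_damp to zero in this toy model leaves only the small natural damping, so the residual swing is visibly larger; the point is simply to show where a locally derived damping signal enters the loop.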