Abstract:
This paper examines patterns of political activity and campaigning on Twitter in the context of the 2012 election in the Australian state of Queensland. Social media have been a visible component of political campaigning in Australia at least since the 2007 federal election, with Twitter, in particular, rising to greater prominence in the 2010 federal election. At state level, however, they have remained comparatively less important thus far. In this paper, we track uses of Twitter in the Queensland campaign from its unofficial start in February through to the election day of 24 March 2012. We examine both the overall patterns of activity around the hashtag #qldvotes and the specific interactions between politicians and other users, tracked by following some 80 Twitter accounts of sitting members of parliament and alternative candidates. Such analysis provides new insights into the different approaches to social media campaigning embraced by specific candidates and party organisations, as well as an indication of the relative importance of social media activities, at present, for state-level election campaigns.
Abstract:
Availability has become a primary goal of information security and is as significant as other goals, in particular, confidentiality and integrity. Maintaining availability of essential services on the public Internet is an increasingly difficult task in the presence of sophisticated attackers. Attackers may abuse the limited computational resources of a service provider, and thus managing computational costs is a key strategy for achieving the goal of availability. In this thesis we focus on cryptographic approaches for managing computational costs, in particular computational effort. We focus on two cryptographic techniques: computational puzzles in cryptographic protocols and secure outsourcing of cryptographic computations. This thesis contributes to the area of cryptographic protocols in the following ways. First, we propose the most efficient puzzle scheme based on modular exponentiations which, unlike previous schemes of the same type, involves only a few modular multiplications for solution verification; our scheme is provably secure. We then introduce a new efficient gradual authentication protocol by integrating a puzzle into a specific signature scheme. Our software implementation results for the new authentication protocol show that our approach is more efficient and effective than the traditional RSA signature-based one and improves the DoS resilience of the Secure Sockets Layer (SSL) protocol, the most widely used security protocol on the Internet. Our next contributions relate to capturing a specific property that enables secure outsourcing of the cryptographic task of partial decryption. We formally define the property of (non-trivial) public verifiability for general encryption schemes, key encapsulation mechanisms (KEMs), and hybrid encryption schemes, encompassing public-key, identity-based, and tag-based encryption flavours. We show that some generic transformations and concrete constructions enjoy this property, and then present a new public-key encryption (PKE) scheme having this property, with a proof of security under standard assumptions. Finally, we combine puzzles with PKE schemes to enable delayed decryption in applications such as e-auctions and e-voting. For this we first introduce the notion of effort-release PKE (ER-PKE), encompassing the well-known timed-release encryption and encapsulated key escrow techniques. We then present a security model for ER-PKE and a generic construction of ER-PKE complying with our security notion.
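The abstract does not spell out the thesis's construction, so the following is only a rough, non-authoritative sketch of the general idea behind modular-exponentiation puzzles, in the style of Rivest-Shamir-Wagner time-lock puzzles rather than the thesis's scheme (whose verification, per the abstract, is cheaper than the trapdoor exponentiation used here). All parameters are toy values.

```python
# Illustrative sketch only: a repeated-squaring puzzle. The solver must do t
# sequential modular squarings; the issuer, who knows the factorisation of N,
# verifies with one short exponentiation by reducing the exponent mod phi(N).
import secrets

def setup(p, q, t):
    """Issuer picks a random challenge base; p, q stay secret."""
    N = p * q
    phi = (p - 1) * (q - 1)
    x = secrets.randbelow(N - 2) + 2   # random base in [2, N)
    return N, phi, x

def solve(N, x, t):
    """Solver's effort: t sequential modular squarings of x."""
    y = x % N
    for _ in range(t):
        y = (y * y) % N
    return y                            # y = x^(2^t) mod N

def verify(N, phi, x, t, y):
    """Issuer's shortcut: reduce 2^t mod phi(N) first, then one pow()."""
    e = pow(2, t, phi)
    return y == pow(x, e, N)

# Toy parameters, far too small for real security.
p, q, t = 104729, 104723, 50000
N, phi, x = setup(p, q, t)
assert verify(N, phi, x, t, solve(N, x, t))
```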
Abstract:
The giant freshwater prawn (Macrobrachium rosenbergii), or GFP, is one of the most important freshwater crustacean species in the inland aquaculture sector of many tropical and subtropical countries. Since the 1990s, there has been rapid global expansion of freshwater prawn farming, especially in Asian countries, with an average annual rate of increase of 48% between 1999 and 2001 (New, 2005). In Vietnam, GFP is cultured in a variety of culture systems, typically in integrated or rotational rice-prawn culture (Phuong et al., 2006), and has become one of the most common farmed aquatic species in the country, due to its ability to grow rapidly and to attract high market prices and high demand. Despite potential for expanded production, sustainability of freshwater prawn farming in the region is currently threatened by low production efficiency and the vulnerability of farmed stocks to disease. Commercial large-scale and small-scale GFP farms in Vietnam have experienced relatively low stock productivity, large size and weight variation, a low proportion of edible meat (large head-to-body ratio), and a scarcity of good-quality seed stock. The current situation highlights the need for a systematic stock improvement program for GFP in Vietnam aimed at improving economically important traits in this species. This study reports on a breeding program for fast growth employing combined (between- and within-family) selection in giant freshwater prawn in Vietnam. The base population was synthesized using a complete diallel cross comprising 9 crosses from two local stocks (DN and MK strains) and a third, exotic stock (Malaysian strain, MY). In the next three selection generations, matings were conducted between genetically unrelated brood stock to produce full-sib and (paternal) half-sib families. All families were produced and reared separately until juveniles in each family were tagged as a batch using visible implant elastomer (VIE) at a body size of approximately 2 g. After tags were verified, 60 to 120 juveniles chosen randomly from each family were released into two common earthen ponds of 3,500 m² each for a grow-out period of 16 to 18 weeks. Selection on body weight at harvest followed a combined (between- and within-family) approach. 81, 89, 96 and 114 families were produced for the Selection line in the F0, F1, F2 and F3 generations, respectively. In addition to the Selection line, 17 to 42 families were produced for the Control group in each generation. Results reported here are based on a data set consisting of 18,387 body and 1,730 carcass records, as well as full pedigree information collected over four generations. Variance and covariance components were estimated by restricted maximum likelihood, fitting a multi-trait animal model. Experiments assessing the performance of VIE tags in juvenile GFP of different size classes, and in individuals tagged with different numbers of tags, showed that juveniles at 2 g were of suitable size for VIE tagging, with no negative effects evident on growth or survival. Tag retention rates were above 97.8% and tag readability rates were 100%, with a correct assignment rate of 95% through to mature animal size of up to 170 g. Across generations, estimates of heritability for body traits (body weight, body length, cephalothorax length, abdominal length, cephalothorax width and abdominal width) and carcass weight traits (abdominal weight, skeleton-off weight and telson-off weight) were moderate, ranging from 0.14 to 0.19 and 0.17 to 0.21, respectively.
Body trait heritabilities estimated for females were significantly higher than for males, whereas carcass weight trait heritabilities estimated for females and males were not significantly different (P > 0.05). Maternal and common environmental effects for body traits accounted for 4 to 5% of the total variance and were greater in females (7 to 10%) than in males (4 to 5%). Genetic correlations among body traits were generally high in both sexes. Genetic correlations between body and carcass weight traits were also high in the mixed sexes. Average selection response (% per generation) for body weight (transformed to square root), estimated as the difference between the Selection and Control groups, was 7.4% calculated from least squares means (LSMs), 7.0% from estimated breeding values (EBVs), and 4.4% calculated from EBVs between two consecutive generations. Favourable correlated selection responses (estimated from LSMs) were detected for other body traits (12.1%, 14.5%, 10.4%, 15.5% and 13.3% for body length, cephalothorax length, abdominal length, cephalothorax width and abdominal width, respectively) over three selection generations. Data in the second selection generation showed positive correlated responses for carcass weight traits (8.8%, 8.6% and 8.8% for abdominal weight, skeleton-off weight and telson-off weight, respectively). Data in the third selection generation showed that heritabilities for body traits were moderate, ranging from 0.06 to 0.11 at week 10 and 0.11 to 0.22 at week 18. Body trait heritabilities estimated at week 10 were not significantly lower than at week 18. Genetic correlations between body traits within age, and for body traits between ages, were generally high. Overall, our results suggest that growth rate responds well to family selection and that carcass weight traits can be improved in parallel using this approach. Moreover, selection for high growth rate in GFP can be undertaken successfully before full market size has been reached. The outcome of this study was the production of an improved culture strain of GFP for the Vietnamese culture industry, which will be trialled in real farm production environments to confirm the genetic gains identified in the experimental stock improvement program.
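For context, the heritabilities and selection responses reported above are linked by the standard breeder's equation (a textbook relation, not a formula quoted from this study):

```latex
R = h^2 S, \qquad h^2 = \frac{\sigma_A^2}{\sigma_P^2}
```

Here R is the expected response per generation, S the selection differential (mean of selected parents minus the population mean), and h² the narrow-sense heritability, the ratio of additive genetic variance to total phenotypic variance. With h² in the moderate 0.14 to 0.19 range reported for body traits, per-generation responses of the order observed here are consistent with this relation.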
Abstract:
Prior to the completion of the human genome project, the human genome was thought to have a greater number of genes, as it seemed structurally and functionally more complex than other, simpler organisms. This, along with the belief in “one gene, one protein”, was demonstrated to be incorrect. The inequality in the ratio of gene to protein formation gave rise to the theory of alternative splicing (AS). AS is a mechanism by which one gene gives rise to multiple protein products. Numerous databases and online bioinformatic tools are available for the detection and analysis of AS. Bioinformatics provides an important approach to study mRNA and protein diversity through resources such as expressed sequence tag (EST) sequences obtained from completely processed mRNA. Microarray and deep sequencing approaches also aid in the detection of splicing events. Initially it was postulated that AS occurred in only about 5% of all genes, but it was later found to be more abundant. Using bioinformatic approaches, the level of AS in human genes was found to be fairly high, with 35-59% of genes having at least one AS form. Our ability to determine and predict AS is important, as disorders in splicing patterns may lead to abnormal splice variants resulting in genetic diseases. In addition, the diversity of proteins produced by AS poses a challenge for successful drug discovery, and therefore a greater understanding of AS would be beneficial.
Abstract:
We have explored the potential of deep Raman spectroscopy, specifically surface enhanced spatially offset Raman spectroscopy (SESORS), for non-invasive detection from within animal tissue, employing SERS-barcoded nanoparticle (NP) assemblies as the diagnostic agent. This concept has been experimentally verified in a clinically relevant backscattered Raman system with an excitation line of 785 nm under ex vivo conditions. We have shown that our SORS system, with a fixed offset of 2-3 mm, offered sensitive probing of injected QTH-barcoded NP assemblies through animal tissue containing both protein and lipid. We have demonstrated that the tailored SERS-barcoded aggregated NP assemblies have significantly higher detection sensitivity than non-aggregated SERS-barcoded gold NPs. We report that these NP assemblies can be readily detected at depths of 7-8 mm within animal proteinaceous tissue with a high signal-to-noise (S/N) ratio. In addition, they could also be detected beneath 1-2 mm of animal tissue with high lipid content, which generally poses a challenge due to the strong absorption of lipids in the near-infrared region. We have also shown that the signal intensity and S/N ratio at a particular depth are a function of the SERS tag concentration used, and that our SORS system has a QTH detection limit of 10⁻⁶ M. Higher detection depths may be obtainable with optimization of the NP assemblies, along with improvements in the instrumentation. Such NP assemblies offer prospects for in vivo, non-invasive detection of tumours, along with scope for incorporation of drugs and their targeted and controlled release at tumour sites. These diagnostic agents combined with drug delivery systems could serve as a “theranostic agent”, an integration of diagnostics and therapeutics into a single platform.
Abstract:
Dwell time at the busway station has a significant effect on bus capacity and delay. Dwell time has conventionally been estimated using models developed on the basis of field survey data. However, field surveys are resource- and cost-intensive, so dwell time estimation based on limited observations can be somewhat inaccurate. Most public transport systems are now equipped with Automatic Passenger Count (APC) and/or Automatic Fare Collection (AFC) systems. AFC in particular reduces on-board ticketing time and the driver's workload, and ultimately reduces bus dwell time. AFC systems record all passenger transactions, providing transit agencies with access to vast quantities of data. AFC data provides transaction timestamps; however, this information differs from dwell time because passengers may tag on or tag off at times other than when doors open and close. This research contended that models could be developed to reliably estimate dwell time distributions when measured distributions of transaction times are known. Development of the models required calibration and validation using field survey data of actual dwell times, and an appreciation of another component of transaction time: bus time in queue. This research develops models for a peak period and an off-peak period at a busway station on the South East Busway (SEB) in Brisbane, Australia.
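The abstract does not specify the model form, so the following is only a minimal sketch of the kind of estimation it describes: bounding dwell time by the observed AFC transaction window plus door-opening and door-closing margins. The field names and the margin parameters alpha and beta are hypothetical placeholders for values that would be calibrated against field survey data.

```python
# Minimal sketch (not the paper's model): derive per-stop-event dwell times
# from AFC transaction timestamps.
from collections import defaultdict

def estimate_dwell_times(transactions, alpha=5.0, beta=8.0):
    """transactions: iterable of (stop_event_id, timestamp_seconds).
    alpha: assumed seconds doors are open before the first tag;
    beta: assumed seconds doors remain open after the last tag."""
    by_event = defaultdict(list)
    for event_id, ts in transactions:
        by_event[event_id].append(ts)
    dwell = {}
    for event_id, stamps in by_event.items():
        window = max(stamps) - min(stamps)   # first-to-last transaction span
        dwell[event_id] = alpha + window + beta
    return dwell

# Example: three tags during one stop event spanning 20 s of transactions.
print(estimate_dwell_times([("trip1_stop3", 100.0),
                            ("trip1_stop3", 112.5),
                            ("trip1_stop3", 120.0)]))
```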
Abstract:
In this paper, we describe a method to represent and discover adversarial group behavior in a continuous domain. In comparison to other types of behavior, adversarial behavior is heavily structured, as the location of a player (or agent) depends both on their teammates and adversaries, in addition to the tactics or strategies of the team. We present a method which can exploit this relationship through the use of a spatiotemporal basis model. As players constantly change roles during a match, we show that employing a "role-based" representation instead of one based on player "identity" can best exploit the playing structure. As vision-based systems currently do not provide perfect detection/tracking (e.g. missed or false detections), we show that our compact representation can effectively "denoise" erroneous detections as well as enable temporal analysis, which was previously prohibitive due to the dimensionality of the signal. To evaluate our approach, we used a fully instrumented field-hockey pitch with 8 fixed high-definition (HD) cameras, evaluating on approximately 200,000 frames of data from a state-of-the-art real-time player detector and comparing against manually labelled data.
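The abstract does not state how detections are mapped to roles. One plausible realisation of a role-based representation (a sketch under that assumption, not necessarily the authors' exact method) assigns each frame's anonymous detections to role prototypes with the Hungarian algorithm, so the feature vector is ordered by role rather than by player identity:

```python
# Plausible sketch only: per-frame assignment of anonymous player detections
# to team "roles" (formation slots) via minimum-cost bipartite matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_roles(detections, role_means):
    """detections: (n, 2) detected positions; role_means: (n, 2) prototypes.
    Returns detections reordered so that row i corresponds to role i."""
    # Cost matrix: squared distance between each detection and each role.
    diff = detections[:, None, :] - role_means[None, :, :]
    cost = (diff ** 2).sum(axis=2)
    det_idx, role_idx = linear_sum_assignment(cost)
    ordered = np.empty_like(detections)
    ordered[role_idx] = detections[det_idx]
    return ordered

# Toy example: two detections arrive in the opposite order to the roles.
roles = np.array([[0.0, 0.0], [10.0, 0.0]])
dets = np.array([[9.5, 0.2], [0.3, -0.1]])
print(assign_roles(dets, roles))   # rows now follow role order
```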
Abstract:
Introduction: The delivery of health care in the 21st century will look like no other in the past. Fast-paced technological advances will need to transition from the information age into clinical practice. The phenomenon of e-Health is the over-arching form of information technology, and telehealth is one arm of that phenomenon. The uptake of telehealth, both in Australia and overseas, has changed the face of health service delivery to many rural and remote communities for the better, removing what is known as the tyranny of distance. Many studies have evaluated satisfaction with, and the cost-benefit of, telehealth across organisational aspects and various adaptations of clinical pathways, and this is the predominant focus of studies published to date. However, whilst many researchers have commented on the need to improve and attend to the communication and relationship-building aspects of telehealth, no studies have examined this further. The aim of this study was to identify patient and clinician experiences, concerns, behaviours and perceptions of the telehealth interaction, and to develop a training tool to assist clinicians to improve their interaction skills. Methods: A mixed methods design combining quantitative (survey analysis and data coding) and qualitative (interview analysis) approaches was adopted. The study comprised four phases: it first qualitatively explored the needs of clients (patients) and clinicians within a telehealth consultation, then designed, developed, piloted, and quantitatively and qualitatively evaluated a telehealth communication training program. Qualitative data were collected and analysed during Phase 1 to describe and define the missing 'communication and rapport building' aspects of telehealth. These data were then utilised in Phase 2 to develop a self-paced, interactive communication training program that enhanced clinicians' existing skills. Phase 3 evaluated the training program with 26 clinicians, with results recorded pre- and post-training, whilst Phase 4 piloted the program, with a view to future recommendations, using a patient group within a Queensland Health setting at two rural hospitals. Results: Comparisons of pre- and post-training data on 1) effective communication styles, 2) involvement in the communication training package, 3) satisfaction, and 4) health outcomes indicated differences in effective communication style and increased satisfaction, but no difference in health outcomes for this patient group. The post-training results revealed that over half of the participants (n = 17, 65%) were more responsive to non-verbal cues and were better able to reflect and respond to looks of anxiousness and confusion from a 'patient' within a telehealth consultation. Post-training evaluations also found that clinicians had enhanced their therapeutic communication, with greater attention to their own body posture, eye contact and presentation. More time was spent looking at the 'patient', with intervals of direct eye contact increasing by 35 seconds, and less time looking down at paperwork, which decreased by 20 seconds.
Overall, 73% of the clinicians were satisfied with the training program, and 61% strongly agreed that they recognised areas of their communication that needed improving during a telehealth consultation. For the patient group there was a significant post-training difference in rapport, with the mean score rising from 42 (SD = 28, n = 27) to 48 (SD = 5.9, n = 24). For communication comfort of the patient group there was a significant difference between the pre- and post-training scores, t(10) = 27.9, p = .002, which meant that overall the patients felt less inhibited whilst talking to the clinicians, and felt more understood. Conclusion: The aim of this study was to explore the characteristics of good patient-clinician communication and unmet training needs for telehealth consultations. The study developed a training program that was specific to telehealth consultations and not dependent on a 'trainer' to deliver the content. In light of the existing literature this is the first of its kind and a valuable contribution to research on this topic. The training program was found to be effective in improving clinicians' communication styles and increasing the satisfaction of patients within an e-health environment. This study has also identified some historical myths, namely that telehealth cannot be part of empathic patient-centred care due to its technology tag.
Abstract:
This paper presents a model for the generation of a MAC tag using a stream cipher. The input message is used indirectly to control segments of the keystream that form the MAC tag. Several recent proposals can be considered as instances of this general model, as they all perform message accumulation in this way. However, they use slightly different processes in the message preparation and finalisation phases. We examine the security of this model for different options and against different types of attack, and conclude that the indirect injection model can be used to generate MAC tags securely for certain combinations of options. Careful consideration is required at the design stage to avoid combinations of options that result in susceptibility to forgery attacks. Additionally, some implementations may be vulnerable to side-channel attacks if used in Authenticated Encryption (AE) algorithms. We give design recommendations to provide resistance to these attacks for proposals following this model.
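The model is described here only abstractly. The following toy sketch, which is deliberately insecure and invented purely for illustration, shows what indirect injection can look like: message bits control which keystream segments are accumulated into the tag. SHAKE-128 stands in for a proper stream cipher, and all parameters are assumptions, not taken from the paper.

```python
# Toy illustration only -- NOT a secure MAC. The message never enters the
# cipher state directly; instead each message bit selects which of two
# available keystream words is XORed (accumulated) into the tag.
import hashlib

def keystream_words(key, iv, n_words, word_bytes=4):
    """SHAKE-128 as a stand-in keystream generator for the sketch."""
    stream = hashlib.shake_128(key + iv).digest(n_words * word_bytes)
    return [int.from_bytes(stream[i * word_bytes:(i + 1) * word_bytes], "big")
            for i in range(n_words)]

def toy_tag(key, iv, message):
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    # Two keystream words per message bit, plus one finalisation word.
    words = keystream_words(key, iv, 2 * len(bits) + 1)
    tag = words[-1]                     # finalisation phase
    for j, b in enumerate(bits):
        tag ^= words[2 * j + b]         # indirect, bit-controlled injection
    return tag.to_bytes(4, "big")

print(toy_tag(b"secret key 16by!", b"nonce-01", b"hello").hex())
```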
Abstract:
In many applications, where encrypted traffic flows from an open (public) domain to a protected (private) domain, there exists a gateway that bridges the two domains and faithfully forwards the incoming traffic to the receiver. We observe that indistinguishability against (adaptive) chosen-ciphertext attacks (IND-CCA), which is a mandatory goal in face of active attacks in a public domain, can be essentially relaxed to indistinguishability against chosen-plaintext attacks (IND-CPA) for ciphertexts once they pass the gateway that acts as an IND-CCA/CPA filter by first checking the validity of an incoming IND-CCA ciphertext, then transforming it (if valid) into an IND-CPA ciphertext, and forwarding the latter to the recipient in the private domain. “Non-trivial filtering” can result in reduced decryption costs on the receivers' side. We identify a class of encryption schemes with publicly verifiable ciphertexts that admit generic constructions of (non-trivial) IND-CCA/CPA filters. These schemes are characterized by existence of public algorithms that can distinguish between valid and invalid ciphertexts. To this end, we formally define (non-trivial) public verifiability of ciphertexts for general encryption schemes, key encapsulation mechanisms, and hybrid encryption schemes, encompassing public-key, identity-based, and tag-based encryption flavours. We further analyze the security impact of public verifiability and discuss generic transformations and concrete constructions that enjoy this property.
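As a minimal sketch of the gateway logic, every hook below is a hypothetical placeholder; the abstract only requires that ciphertext validity be publicly checkable (no secret key needed) and that a valid IND-CCA ciphertext be transformable into an IND-CPA one. The toy checksum instantiation is invented for illustration.

```python
# Minimal sketch of an IND-CCA/CPA filter gateway. All hooks are placeholders.
from typing import Callable, Optional

def make_filter(publicly_verify: Callable[[bytes], bool],
                strip_to_cpa: Callable[[bytes], bytes]):
    def gateway(cca_ciphertext: bytes) -> Optional[bytes]:
        # 1. Public validity check: requires no secret key, so the gateway
        #    can sit between domains without being trusted with keys.
        if not publicly_verify(cca_ciphertext):
            return None                     # drop invalid traffic
        # 2. Transform: e.g. drop the publicly checkable consistency proof,
        #    leaving a shorter IND-CPA ciphertext for the private domain.
        return strip_to_cpa(cca_ciphertext)
    return gateway

# Toy instantiation: the "proof" is a checksum byte appended to the payload.
verify = lambda ct: len(ct) > 1 and ct[-1] == sum(ct[:-1]) % 256
strip = lambda ct: ct[:-1]
gw = make_filter(verify, strip)
payload = b"msg"
print(gw(payload + bytes([sum(payload) % 256])))   # -> b'msg'
print(gw(b"msg\x00"))                              # -> None (rejected)
```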
Abstract:
It’s hard not to be somewhat cynical about the self-congratulatory ‘diversity’ at the centre of the growing calendar of art bi/tri-ennials. The –ennial has proven expedient to the global tourism circuit, keeping regional economies and a relatively moderate pool of transnational artists afloat, and the Asia Pacific Triennial is no exception. The mediation of representation that is imperative to the ‘best of’ formats of these transnational art shows hinges on a categorical backwardness that can feel more like a Miss World competition than a progressive art show, because the little tag in parenthesis after each artist’s name seems just as politically precarious now as it did forty years ago. Despite a weighty corpus of practical and critical work to the contrary, identity politics are so intrinsic to art capitalization, for both artists and institutions, that extricating ourselves from the particular and strategic politics of identification is seemingly impossible. Not that everyone wants to, of course.
Abstract:
We hypothesized that normal human mesothelial cells acquire resistance to asbestos-induced toxicity via induction of one or more epidermal growth factor receptor (EGFR)-linked survival pathways (phosphoinositol-3-kinase/AKT/mammalian target of rapamycin and extracellular signal-regulated kinase [ERK] 1/2) during simian virus 40 (SV40) transformation and carcinogenesis. Both isolated HKNM-2 mesothelial cells and a telomerase-immortalized mesothelial line (LP9/TERT-1) were more sensitive to crocidolite asbestos toxicity than an SV40 Tag-immortalized mesothelial line (MET5A) and malignant mesothelioma cell lines (HMESO and PPM Mill). Whereas increases in phosphorylation of AKT (pAKT) were observed in MET5A cells in response to asbestos, LP9/TERT-1 cells exhibited dose-related decreases in pAKT levels. Pretreatment with an EGFR phosphorylation or mitogen-activated protein kinase kinase 1/2 inhibitor abrogated asbestos-induced phosphorylated ERK (pERK) 1/2 levels in both LP9/TERT-1 and MET5A cells as well as increases in pAKT levels in MET5A cells. Transient transfection of small interfering RNAs targeting ERK1, ERK2, or AKT revealed that ERK1/2 pathways were involved in cell death by asbestos in both cell lines. Asbestos-resistant HMESO or PPM Mill cells with high endogenous levels of ERKs or AKT did not show dose-responsive increases in pERK1/ERK1, pERK2/ERK2, or pAKT/AKT levels by asbestos. However, small hairpin ERK2 stable cell lines created from both malignant mesothelioma lines were more sensitive to asbestos toxicity than shERK1 and shControl lines, and exhibited unique, tumor-specific changes in endogenous cell death-related gene expression. Our results suggest that EGFR phosphorylation is causally linked to pERK and pAKT activation by asbestos in normal and SV40 Tag-immortalized human mesothelial cells. They also indicate that ERK2 plays a role in modulating asbestos toxicity by regulating genes critical to cell injury and survival that are differentially expressed in human mesotheliomas.
Abstract:
Post-transcriptional silencing of plant genes using anti-sense or co-suppression constructs usually results in only a modest proportion of silenced individuals. Recent work has demonstrated the potential for constructs encoding self-complementary 'hairpin' RNA (hpRNA) to efficiently silence genes. In this study we examine design rules for efficient gene silencing, in terms of both the proportion of independent transgenic plants showing silencing, and the degree of silencing. Using hpRNA constructs containing sense/anti-sense arms ranging from 98 to 853 nt gave efficient silencing in a wide range of plant species, and inclusion of an intron in these constructs had a consistently enhancing effect. Intron-containing constructs (ihpRNA) generally gave 90-100% of independent transgenic plants showing silencing. The degree of silencing with these constructs was much greater than that obtained using either co-suppression or anti-sense constructs. We have made a generic vector, pHANNIBAL, that allows a simple, single PCR product from a gene of interest to be easily converted into a highly effective ihpRNA silencing construct. We have also created a high-throughput vector, pHELLSGATE, that should facilitate the cloning of gene libraries or large numbers of defined genes, such as those in EST collections, using an in vitro recombinase system. This system may facilitate the large-scale determination and discovery of plant gene functions in the same way as RNAi is being used to examine gene function in Caenorhabditis elegans.
Abstract:
Since its launch in 2006, Twitter has turned from a niche service to a mass phenomenon. By the beginning of 2013, the platform claims to have more than 200 million active users, who “post over 400 million tweets per day” (Twitter, 2013). Its success is spreading globally; Twitter is now available in 33 different languages, and has significantly increased its support for languages that use non-Latin character sets. While Twitter, Inc. has occasionally changed the appearance of the service and added new features—often in reaction to users’ developing their own conventions, such as adding ‘#’ in front of important keywords to tag them—the basic idea behind the service has stayed the same: users may post short messages (tweets) of up to 140 characters and follow the updates posted by other users. This leads to the formation of complex follower networks with unidirectional as well as bidirectional connections between individuals, but also between media outlets, NGOs, and other organisations. While originally ‘microblogs’ were perceived as a new genre of online communication, of which Twitter was just one exemplar, the platform has become synonymous with microblogging in most countries. A notable exception is Sina Weibo, popular in China where Twitter is not available. Other similar platforms have been shut down (e.g., Jaiku), or are being used in slightly different ways (e.g., Tumblr), thus making Twitter a unique service within the social media landscape.
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information: be that information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or information which gives an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the data sets collected and analyzed are preformed, that is, built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders. The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as an authoritative source. Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge as to the core keywords or hashtags relating to the current event is constantly refined for future data collection. The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions which has the potential to impact on future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate. The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis.
A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art. In this paper we describe strategies for the creation of a social media archive, specifically of tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement’s presence on Twitter changes over time. We also discuss the opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study. The common theme amongst these papers is that of constructing a data set, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and subsequent discussion, the panel will inform the wider research community not only about the objectives and limitations of data collection, live analytics, and filtering, but also about current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches can be customized depending on the project stakeholders.
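As a rough illustration of the content-analysis triage the first paper describes, the sketch below scores incoming tweets for topical relevance, urgency, and source authority, then keeps only as many as responders can review. All keyword lists, weights, handles, and the capacity threshold are invented for illustration; the panel abstract specifies only the general scoring-and-filtering approach.

```python
# Illustrative sketch of content-analysis triage for incoming crisis tweets.
import re

TOPIC_TERMS = {"flood", "evacuate", "bridge", "shelter"}       # hypothetical
URGENT_TERMS = {"trapped", "help", "injured", "now"}           # hypothetical
AUTHORITATIVE = {"qpsmedia", "abcemergency"}                   # example handles

def score_tweet(text: str, author: str) -> float:
    words = set(re.findall(r"[a-z]+", text.lower()))
    relevance = len(words & TOPIC_TERMS)
    urgency = 2.0 * len(words & URGENT_TERMS)   # urgency weighted higher
    authority = 3.0 if author.lower() in AUTHORITATIVE else 0.0
    return relevance + urgency + authority

def triage(tweets, capacity=2):
    """Keep only the top-'capacity' tweets for manual review by responders."""
    ranked = sorted(tweets, key=lambda t: score_tweet(*t), reverse=True)
    return ranked[:capacity]

stream = [("Traffic slow near the bridge", "someuser"),
          ("Family trapped by flood, need help now", "localres"),
          ("Evacuate low-lying areas, shelter open", "qpsmedia")]
for text, author in triage(stream):
    print(score_tweet(text, author), text)
```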