Abstract:
Exhaust emissions were monitored in real time at the kerb of a busy busway used by a mix of diesel- and CNG-powered transport buses. Particle number concentration in the size range 3 nm to 3 µm was measured with a TSI condensation particle counter (CPC 3025), and particle mass (PM2.5) was measured with a TSI DustTrak 8520. CO2 concentrations were measured with a fast-response CO2 analyser (Sable CA-10A). All concentrations were recorded in real time at 1 s resolution, together with the precise passage times of the buses. The instantaneous ratio of particle number (or mass) concentration to CO2 concentration, denoted Z, was used as a measure of the particle number (or mass) emission factor of each passing bus.
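As an illustration of how such a ratio might be computed, the following sketch derives a per-passage Z from synchronised 1 s particle and CO2 time series; the background-subtraction step, window lengths, and variable names are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def plume_emission_ratio(particle_conc, co2_conc, passage_idx, window=10, background_window=60):
    """Estimate Z as the ratio of above-background particle concentration to
    above-background CO2 concentration for a single bus passage (1 s time series)."""
    # background taken from the minute preceding the passage (assumed choice)
    bg_particle = np.median(particle_conc[max(0, passage_idx - background_window):passage_idx])
    bg_co2 = np.median(co2_conc[max(0, passage_idx - background_window):passage_idx])
    # integrate the plume over a short window after the bus passes
    sl = slice(passage_idx, passage_idx + window)
    d_particle = np.sum(particle_conc[sl] - bg_particle)
    d_co2 = np.sum(co2_conc[sl] - bg_co2)
    return d_particle / d_co2 if d_co2 > 0 else np.nan

# Synthetic 1 Hz data with a plume centred at t = 300 s
t = np.arange(600)
co2 = 400 + 50 * np.exp(-((t - 300.0) ** 2) / 20)         # ppm
particles = 5e3 + 4e4 * np.exp(-((t - 300.0) ** 2) / 20)  # particles per cm^3
print(plume_emission_ratio(particles, co2, passage_idx=300))
```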
Abstract:
Large arrays and networks of carbon nanotubes, both single- and multi-walled, feature many superior properties that offer excellent opportunities for a range of modern applications, from nanoelectronics, supercapacitors, photovoltaic cells, and energy storage and conversion devices to gas sensors, biosensors, and nanomechanical and biomedical devices. At present, arrays and networks of carbon nanotubes are mainly fabricated from pre-fabricated, separated nanotubes by solution-based techniques. However, the intrinsic structure of the nanotubes (mainly the level of structural defects), on which the best performance in nanotube-based applications depends, is often damaged during array/network fabrication by the surfactants, chemicals, and sonication involved in the process. As a result, the performance of the functional devices may be significantly degraded. In contrast, directly synthesized nanotube arrays/networks can preclude the adverse effects of solution-based processing and largely preserve the excellent properties of the pristine nanotubes. Owing to their advantages of scalable production and precise positioning of the grown nanotubes, catalytic and catalyst-free chemical vapor deposition (CVD), as well as plasma-enhanced chemical vapor deposition (PECVD), are the most promising methods for the direct synthesis of nanotubes.
Abstract:
This study reports on the use of the Manchester Driver Behaviour Questionnaire (DBQ) to examine the self-reported driving behaviours of a large sample of Australian fleet drivers (N = 3414). Surveys were completed by employees before they commenced a one-day safety workshop intervention. Factor analysis techniques identified a three-factor solution similar to previous research, comprising: (a) errors, (b) highway-code violations and (c) aggressive driving violations. Two items traditionally associated with highway-code violations were found to be associated with aggressive driving behaviours in the current sample. Multivariate analyses revealed that road exposure, errors and self-reported offences predicted crashes at work in the last 12 months, while gender, highway-code violations and crashes predicted offences incurred while at work. Importantly, those who received more fines at work were at an increased risk of crashing the work vehicle. Overall, however, the DBQ demonstrated limited efficacy at predicting these two outcomes. This paper outlines the major findings of the study with regard to identifying and predicting aberrant driving behaviours and also highlights implications for the future use of the DBQ within fleet settings.
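For readers unfamiliar with the technique, the sketch below shows how a three-factor solution can be extracted from item-level survey data with scikit-learn; the responses here are synthetic stand-ins, and the study's actual factor-analysis procedure is not specified beyond what the abstract states.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical DBQ responses: n drivers x k items, scored 0-5 (synthetic stand-in data).
rng = np.random.default_rng(0)
n_drivers, n_items = 3414, 20
responses = rng.integers(0, 6, size=(n_drivers, n_items)).astype(float)

# Extract a three-factor solution (errors, highway-code violations and aggressive
# violations in the study; here the factors are unlabelled because the data are synthetic).
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(responses)

# Items loading most strongly on each factor
loadings = fa.components_.T            # shape: (n_items, 3)
for f in range(3):
    top_items = np.argsort(-np.abs(loadings[:, f]))[:5]
    print(f"Factor {f + 1}: items {top_items}")
```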
Abstract:
In elite sports, nearly all performances are captured on video. Despite the massive amount of video that has been captured in this domain over the last 10-15 years, most of it remains in an 'unstructured' or 'raw' form, meaning it can only be viewed or manually annotated/tagged with higher-level event labels, which is time-consuming and subjective. As such, depending on the detail or depth of annotation, the value of the collected repositories of archived data is minimal, as it does not lend itself to large-scale analysis and retrieval. One such example is swimming, where each race of a swimmer is captured on a camcorder and, in addition to the split times (i.e., the time it takes to complete each lap), stroke rates and stroke lengths are manually annotated. In this paper, we propose a vision-based system which effectively 'digitizes' a large collection of archived swimming races by estimating the location of the swimmer in each frame, as well as detecting the stroke rate. As the videos are captured from moving hand-held cameras located at different positions and angles, we show that our hierarchical approach to tracking the swimmer and their body parts is robust to these issues and allows us to accurately estimate swimmer location and stroke rates.
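The abstract does not specify how stroke rate is detected, but one common approach is to treat a tracked body-point coordinate as a quasi-periodic signal and take its dominant frequency; the sketch below illustrates that idea on a synthetic signal and is not the authors' method.

```python
import numpy as np

def stroke_rate_from_track(y, fps):
    """Estimate stroke rate (strokes per minute) from a quasi-periodic tracking signal,
    e.g. the vertical image coordinate of a tracked arm or head point (illustrative only)."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / fps)
    band = (freqs > 0.3) & (freqs < 2.0)      # plausible stroke frequencies: 18-120 per minute
    dominant = freqs[band][np.argmax(spectrum[band])]
    return dominant * 60.0

# Synthetic check: a 0.8 Hz oscillation sampled at 50 fps should give ~48 strokes/min
fps, seconds = 50, 20
t = np.arange(fps * seconds) / fps
signal = np.sin(2 * np.pi * 0.8 * t) + 0.2 * np.random.randn(t.size)
print(round(stroke_rate_from_track(signal, fps)))   # ~48
```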
Abstract:
Background: Many countries are scaling up malaria interventions towards elimination. This transition changes demands on malaria diagnostics from diagnosing ill patients to detecting parasites in all carriers, including asymptomatic infections and infections with low parasite densities. Detection methods suitable to local malaria epidemiology must be selected prior to transitioning a malaria control programme to elimination. A baseline malaria survey conducted in Temotu Province, Solomon Islands in late 2008, as the first step in a provincial malaria elimination programme, provided malaria epidemiology data and an opportunity to assess how well different diagnostic methods performed in this setting. Methods: During the survey, 9,491 blood samples were collected and examined by microscopy for Plasmodium species and density, with a subset also examined by polymerase chain reaction (PCR) and rapid diagnostic tests (RDTs). The performances of these diagnostic methods were compared. Results: A total of 256 samples were positive by microscopy, giving a point prevalence of 2.7%. The species distribution was 17.5% Plasmodium falciparum and 82.4% Plasmodium vivax. In this low-transmission setting, only 17.8% of the P. falciparum-infected and 2.9% of the P. vivax-infected subjects were febrile (≥38°C) at the time of the survey. A significant proportion of infections detected by microscopy, 40% for P. falciparum and 65.6% for P. vivax, had parasite densities below 100/μL. The proportion of infections with parasite densities below 100/μL was correlated with age for P. vivax infections, but not for P. falciparum infections. PCR detected substantially more infections than microscopy (point prevalence of 8.71%), indicating that a large number of subjects had sub-microscopic parasitaemia. The concordance between PCR and microscopy in detecting single species was greater for P. vivax (135/162) than for P. falciparum (36/118). The malaria RDT detected the 12 microscopy- and PCR-positive P. falciparum infections, but failed to detect 12 of 13 microscopy- and PCR-positive P. vivax infections. Conclusion: Asymptomatic malaria infections and infections with low and sub-microscopic parasite densities are highly prevalent in Temotu Province, where malaria transmission is low. This presents a challenge for elimination, since a large proportion of the parasite reservoir will not be detected by standard active and passive case detection. Effective mass screening and treatment campaigns will therefore most likely require more sensitive assays, such as a field-deployable molecular-based assay.
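The headline figures quoted above can be recomputed directly from the counts reported in the abstract, as in the short check below (all numbers are taken from the text).

```python
# Recomputing the figures reported in the abstract.
positive_by_microscopy, total_samples = 256, 9491
print(f"Microscopy point prevalence: {100 * positive_by_microscopy / total_samples:.1f}%")  # ~2.7%

# Concordance between PCR and microscopy for single-species detections
pv_concordant, pv_total = 135, 162
pf_concordant, pf_total = 36, 118
print(f"P. vivax concordance:      {100 * pv_concordant / pv_total:.1f}%")   # ~83%
print(f"P. falciparum concordance: {100 * pf_concordant / pf_total:.1f}%")   # ~31%

# RDT performance for P. vivax among microscopy- and PCR-positive samples
rdt_missed, pv_positives = 12, 13
print(f"RDT sensitivity for P. vivax: {100 * (1 - rdt_missed / pv_positives):.0f}%")  # ~8%
```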
Abstract:
Accurate three-dimensional representations of cultural heritage sites are highly valuable for scientific study, conservation, and educational purposes. In addition to their use for archival purposes, 3D models enable efficient and precise measurement of relevant natural and architectural features. Many cultural heritage sites are large and complex, consisting of multiple structures spatially distributed over tens of thousands of square metres. Effectively digitising such geometrically complex locations requires measurements to be acquired from a variety of viewpoints. While several technologies exist for capturing the 3D structure of objects and environments, none is ideally suited to complex, large-scale sites, mainly because of limited coverage or acquisition efficiency. We explore the use of a recently developed handheld mobile mapping system, called Zebedee, in cultural heritage applications. The Zebedee system can efficiently map an environment in three dimensions by continually acquiring data as an operator holding the device traverses the site. The system was deployed at the former Peel Island Lazaret, a culturally significant site in Queensland, Australia, consisting of dozens of buildings of various sizes spread across an area of approximately 400 × 250 m. With the Zebedee system, the site was scanned in half a day, and a detailed 3D point cloud model (with over 520 million points) was generated from the 3.6 hours of acquired data in 2.6 hours of processing. We present results demonstrating that Zebedee captured both site context and building detail with accuracy comparable to manual measurement techniques, and at a greatly increased level of efficiency and scope. The scan allowed us to record derelict buildings that previously could not be measured because of the scale and complexity of the site. The resulting 3D model captures both interior and exterior features of buildings, including structure, materials, and the contents of rooms.
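As a purely illustrative aside, a point cloud of this size is typically downsampled before interactive inspection; the sketch below shows one way to do that with the Open3D library, using a hypothetical file name, and is not part of the Zebedee processing pipeline.

```python
import open3d as o3d

# Illustrative only: this just shows how a large exported point cloud (such as the
# 520-million-point Peel Island model) might be voxel-downsampled for inspection.
# The file name is hypothetical.
pcd = o3d.io.read_point_cloud("peel_island_lazaret.ply")
print(pcd)                                       # reports the number of points loaded
down = pcd.voxel_down_sample(voxel_size=0.05)    # keep roughly one point per 5 cm voxel
o3d.io.write_point_cloud("peel_island_lazaret_5cm.ply", down)
o3d.visualization.draw_geometries([down])
```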
Abstract:
In this paper we propose a novel scheme for carrying out speaker diarization in an iterative manner. We aim to show that the information obtained through the first pass of speaker diarization can be reused to refine and improve the original diarization results. We call this technique speaker rediarization and demonstrate the practical application of our rediarization algorithm using a large archive of two-speaker telephone conversation recordings. We use the NIST 2008 SRE summed telephone corpus for evaluating our speaker rediarization system. This corpus contains recurring speaker identities across independent recording sessions that need to be linked across the entire corpus. We show that our speaker rediarization scheme can take advantage of inter-session speaker information, linked in the initial diarization pass, to achieve a 30% relative improvement over the original diarization error rate (DER) after only two iterations of rediarization.
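The abstract does not detail how recurring identities are linked, but a common building block is to cluster per-session speaker representations across sessions by cosine distance; the sketch below illustrates that step with synthetic embeddings and is not the authors' rediarization algorithm.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical per-session speaker embeddings (e.g. i-vector-like vectors). Linking
# recurring identities across sessions is done here by agglomerative clustering on
# cosine distance; an illustrative stand-in, not the paper's method.
rng = np.random.default_rng(1)
true_identities = rng.integers(0, 5, size=40)                 # 5 recurring speakers, 40 session sides
embeddings = rng.normal(size=(5, 64))[true_identities] + 0.1 * rng.normal(size=(40, 64))

dists = pdist(embeddings, metric="cosine")
links = fcluster(linkage(dists, method="average"), t=5, criterion="maxclust")
print("Cross-session speaker links:", links)
# A second diarization pass could then pool all segments sharing a link to refine
# that speaker's model before re-segmenting each recording.
```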
Abstract:
The polymorphism of human glutathione transferase hGSTT1-1 is expressed in three phenotypes. Experimentally, individuals can be classified as non-conjugators, low conjugators and high conjugators depending on the enzyme activity in blood towards methylene chloride, measured using a gas chromatographic assay. Non-conjugators do not have a functional hGSTT1 gene; however, little is known about the molecular basis of the three conjugator phenotypes. The higher hGSTT1-1 activity in high conjugators may be the result of enzyme induction or be genetically determined. Twenty-nine members of a large family, spanning three generations, were phenotyped and genotyped with respect to hGSTT1-1. The hGSTT1-1 enzyme activity of high conjugators was twice as high as that of low conjugators. The distribution of hGSTT1-1 phenotypes strongly indicates Mendelian intermediary inheritance, in which a gene-dosage effect results in doubled enzyme expression in the presence of two functional alleles. The Mendelian intermediary inheritance is further supported by the findings of a semiquantitative polymerase chain reaction method designed to distinguish the three genotypes of hGSTT1 for rapid screening of large study groups.
Abstract:
The transfer of chemical vapor deposited graphene is a crucial process that can affect the quality of the transferred films and compromise their application in devices. Finding a robust and intrinsically clean material capable of easing the transfer of graphene without interfering with its properties remains a challenge. Here we propose the use of an organic compound, cyclododecane, as a transfer material. This material can easily be spin-coated onto graphene to assist the transfer, leaving no residues and requiring no further removal processes. The effectiveness of this transfer method for few-layer graphene over a large area was evaluated and confirmed by microscopy, Raman spectroscopy, X-ray photoemission spectroscopy, and four-point probe measurements. Schottky-barrier solar cells with few-layer graphene were fabricated on silicon wafers using the cyclododecane transfer method and outperformed reference cells made by standard methods.
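For context, sheet resistance from a collinear four-point probe on a thin, laterally extended film is conventionally estimated with the standard geometric factor; the sketch below applies that textbook relation to placeholder values, not to measurements from the paper.

```python
import math

def sheet_resistance(voltage, current):
    """Standard collinear four-point-probe estimate for a thin film much larger than
    the probe spacing: R_s = (pi / ln 2) * V / I, in ohms per square."""
    return (math.pi / math.log(2)) * voltage / current

# Placeholder numbers, not measurements from the paper: 1 mA drive current, 88 mV sensed
print(f"R_s ~ {sheet_resistance(88e-3, 1e-3):.0f} ohm/sq")
```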
Abstract:
Although the collection of player and ball tracking data is fast becoming the norm in professional sports, large-scale mining of such spatiotemporal data has yet to surface. In this paper, given an entire season's worth of player and ball tracking data from a professional soccer league (approximately 400,000,000 data points), we present a method which can conduct both individual player and team analysis. Due to the dynamic, continuous and multi-player nature of team sports like soccer, a major issue is aligning player positions over time. We present a "role-based" representation that dynamically updates each player's relative role at each frame and demonstrate how this captures the short-term context to enable both individual player and team analysis. We discover roles directly from data by utilizing a minimum-entropy data partitioning method and show how this can be used to accurately detect and visualize formations, as well as analyze individual player behavior.
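A common building block for such role-based representations is a per-frame assignment of players to role slots; the sketch below illustrates that step with the Hungarian algorithm and invented coordinates, whereas the paper's minimum-entropy data partitioning method is more involved.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def assign_roles(player_xy, role_template_xy):
    """Assign each player to one role slot for a single frame by minimising the total
    distance between player positions and a role template (a sketch of the per-frame
    alignment step; the paper's minimum-entropy partitioning differs)."""
    cost = cdist(player_xy, role_template_xy)       # pairwise distances, players x roles
    rows, cols = linear_sum_assignment(cost)
    return cols                                     # cols[i] = role index for player i

# Example: 10 outfield players and a 4-4-2-like role template (coordinates are invented).
rng = np.random.default_rng(0)
template = np.array(
    [[20.0, y] for y in (10, 22, 34, 46)] +   # defensive line
    [[50.0, y] for y in (10, 22, 34, 46)] +   # midfield line
    [[80.0, 22.0], [80.0, 34.0]]              # two forwards
)
players = template + rng.normal(scale=5.0, size=template.shape)  # noisy observed positions
print(assign_roles(players, template))
```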
Abstract:
Recently, attempts to improve decision making in species management have focussed on uncertainties associated with modelling temporal fluctuations in populations. Reducing model uncertainty is challenging; while larger samples improve estimation of species trajectories and reduce statistical errors, they typically amplify variability in observed trajectories. In particular, traditional modelling approaches aimed at estimating population trajectories usually do not account well for nonlinearities and uncertainties associated with the multi-scale observations characteristic of large spatio-temporal surveys. We present a Bayesian semi-parametric hierarchical model for simultaneously quantifying uncertainties associated with model structure and parameters, and scale-specific variability over time. We estimate uncertainty across a four-tiered spatial hierarchy of coral cover from the Great Barrier Reef. Coral variability is well described; however, our results show that, in the absence of additional model specifications, conclusions regarding coral trajectories become highly uncertain when considering multiple reefs, suggesting that management should focus more at the scale of individual reefs. The approach presented facilitates the description and estimation of population trajectories and associated uncertainties when variability cannot be attributed to specific causes and origins. We argue that our model can unlock value contained in large-scale datasets, provide guidance for understanding sources of uncertainty, and support better-informed decision making.
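To illustrate what scale-specific variability across a four-tiered hierarchy looks like, the toy simulation below generates nested random effects with invented variance components; it is only a caricature, not the Bayesian semi-parametric hierarchical model described above.

```python
import numpy as np

# Toy simulation of a four-tier spatial hierarchy (region > reef > site > transect),
# illustrating how uncertainty about mean coral cover accumulates across scales.
# All variance components and the grand mean are assumptions for illustration.
rng = np.random.default_rng(42)
n_regions, n_reefs, n_sites, n_transects = 4, 10, 5, 6
sd_region, sd_reef, sd_site, sd_transect = 8.0, 6.0, 4.0, 3.0   # % cover, assumed
grand_mean = 30.0                                                # % cover, assumed

region_eff = rng.normal(0, sd_region, n_regions)
reef_eff = rng.normal(0, sd_reef, (n_regions, n_reefs))
site_eff = rng.normal(0, sd_site, (n_regions, n_reefs, n_sites))
obs = (grand_mean
       + region_eff[:, None, None, None]
       + reef_eff[:, :, None, None]
       + site_eff[:, :, :, None]
       + rng.normal(0, sd_transect, (n_regions, n_reefs, n_sites, n_transects)))

# Variability of aggregated means at two levels of the hierarchy
print("SD of reef means within a region:", obs.mean(axis=(2, 3)).std(axis=1).mean().round(2))
print("SD of region means:              ", obs.mean(axis=(1, 2, 3)).std().round(2))
```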
Abstract:
As a key element of their response to new media forcing transformations in mass media and media use, newspapers have deployed various strategies not only to establish online and mobile products and develop healthy business plans, but also to set out to become dominant portals. Their response to change was the subject of an early investigation by one of the present authors (Keshvani 2000). That was part of a set of short studies inquiring into what impact new software applications and digital convergence might have on journalism practice (Tickle and Keshvani 2000), and also looking for demonstrations of the way that innovations, technologies and protocols then under development might produce a "wireless, streamlined electronic news production process" (Tickle and Keshvani 2001). The newspaper study compared the online products of The Age in Melbourne and the Straits Times in Singapore. It provided an audit of the Singapore and Australian Information and Communications Technology (ICT) climate, concentrating on the state of development of carrier networks as a determining factor in the potential strength of the two services in their respective markets. In the outcome, contrary to initial expectations, the early cable roll-out and extensive 'wiring' of the city in Singapore had not produced a level of uptake of Internet services as strong as that achieved in Melbourne by more ad hoc and varied strategies. By interpretation, while news websites and online content were at an early stage of development everywhere, and much the same as one another, no determining structural imbalance existed to separate these leading media participants in Australia and South-east Asia. The present research revisits that situation by again studying the online editions of the two large newspapers in the original study, together with one other, The Courier Mail (recognising the diversification of types of product in this field by including it as a representative of Newscorp, now a major participant). The inquiry works through the principle of comparison. It is an exercise in qualitative, empirical research that establishes a comparison between the situation in 2000, as described in the earlier work, and the situation in 2014, after a decade of intense development in digital technology affecting the media industries. It is in that sense a follow-up study to the earlier work, although this time giving emphasis to the content and style of the actual products as experienced by their users. It compares the online and print editions of each of the three newspapers; then the three mastheads as print and online entities, among themselves; and finally one against the other two, as representing a South-east Asian model and Australian models. This exercise is accompanied by a review of literature on developments in ICT affecting media production and media organisations, to establish the changed context. The new study of the online editions is conducted as a systematic appraisal of the first level, or principal screens, of the three publications over the course of six days (10-15.2.14 inclusive). For this, categories for analysis were devised through a preliminary examination of the products over three days in the week before.
That process identified significant elements of media production, such as: variegated sourcing of materials; randomness in the presentation of items; differential production values among the media platforms considered, whether text, video or still images; the occasional repurposing and repackaging of the top news stories of the day; and the presence of standard news values, once again drawn out of the trial 'bundle' of journalistic items. Reduced in this way, the online artefacts become comparable with the companion print editions from the same days. The categories devised and then used in the appraisal of the online products have been adapted to print, to give the closest match of sets of variables. This device, studying the two sets of publications on like standards (essentially production values and news values), has enabled the comparisons to be made. This comparison of the online and print editions of each of the three publications was set up as the first step in the investigation. In recognition of the nature of the artefacts, as ones that carry very diverse information by subject and level of depth and involve heavy creative investment in the formulation and presentation of the information, the assessment also includes an open section for interpreting and commenting on main points of comparison. This takes the form of a text field for the insertion of notes in the table employed for summarising the features of each product, for each day. When the sets of comparisons outlined above are noted, the process then becomes interpretative, guided by the notion of change. In the context of changing media technology and publication processes, what substantive alterations have taken place in the overall effort of news organisations in the print and online fields since 2001, and in their print and online products separately? Have they diverged or continued along similar lines? The remaining task is to begin to make inferences from that. Will the examination of findings support the proposition that a review of the earlier study, and a forensic review of new models, provides evidence of the character and content of change, especially change in journalistic products and practice? Will it permit an authoritative description of the essentials of such change in products and practice? Will it permit generalisation, and provide a reliable base for discussion of the implications of change and future prospects? Preliminary observations suggest a more dynamic and diversified product has been developed in Singapore, well themed, and obviously sustained by public commitment and habituation to diversified online and mobile media services. The Australian products suggest a concentrated corporate and journalistic effort and deployment of resources, with a strong market focus, but less settled and ordered, and showing signs of limitations imposed by the delay in establishing a uniform, large broadband network. The scope of the study is limited. It is intended to test, and take advantage of, the original study as evidentiary material from the early days of newspaper companies' experimentation with online formats. Both are small studies. The key opportunity for discovery lies in the 'time capsule' factor: the availability of well-gathered and processed information on major newspaper company production at the threshold of a transformational decade of change in their industry. The comparison stands to identify key changes.
It should also be useful as a reference for further inquiries of the same kind, and for monitoring the situation regarding newspaper portals online into the future.
Abstract:
This research aims to explore and identify political risks on a large infrastructure project in an exaggerated environment, to ascertain whether sufficient objective information can be gathered by project managers to utilise risk modelling techniques. During the study, the author proposes a new definition of political risk; performs a detailed project study of the Neelum Jhelum Hydroelectric Project in Pakistan; implements a probabilistic model using the principle of decomposition and Bayes' theorem; and answers the question: was it possible for project managers to obtain all the relevant objective data to implement a probabilistic model?
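A minimal worked example of the kind of Bayes-theorem update such a probabilistic model relies on is sketched below; the prior and likelihood values are entirely hypothetical and unrelated to the Neelum Jhelum project data.

```python
def bayes_update(prior, p_evidence_given_event, p_evidence_given_no_event):
    """Posterior P(event | evidence) via Bayes' theorem."""
    numerator = p_evidence_given_event * prior
    return numerator / (numerator + p_evidence_given_no_event * (1 - prior))

# Entirely hypothetical numbers, for illustration only: prior chance of a politically
# driven work stoppage in a given year, updated after an early-warning indicator
# (e.g. a contractor security incident) is observed.
prior = 0.10
posterior = bayes_update(prior, p_evidence_given_event=0.7, p_evidence_given_no_event=0.2)
print(f"Posterior risk: {posterior:.2f}")   # ~0.28
```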
Abstract:
There is strong current interest in the use of biodegradable scaffolds in combination with bone growth factors as a valuable alternative to the current gold standard, autograft, in spinal fusion surgery (Yong et al. 2013). Here we report on a 6- versus 12-month data set evaluating the longitudinal performance of a CaP-coated polycaprolactone (PCL) scaffold loaded with recombinant human bone morphogenetic protein-2 (rhBMP-2) as a bone graft substitute in a preclinical ovine thoracic spine model. The results of this study demonstrate the efficacy of scaffold-based delivery of rhBMP-2 in promoting higher fusion grades at 6 and 12 months in comparison to the scaffold alone or the autograft group within the same time frame. Fusion grades achieved at six months using PCL+rhBMP-2 are not significantly increased at twelve months post-surgery.