905 results for The Impossible Is Possible


Relevance:

100.00%

Publisher:

Abstract:

We describe an investigation into how Massey University's Pollen Classifynder can accelerate the understanding of pollen and its role in nature. The Classifynder is an imaging microscopy system that can locate, image and classify slide-based pollen samples. Given the laboriousness of purely manual image acquisition and identification, it is vital to exploit assistive technologies like the Classifynder to enable acquisition and analysis of pollen samples. It is also vital that we understand the strengths and limitations of automated systems so that they can be used (and improved) to complement the strengths and weaknesses of human analysts to the greatest extent possible. This article reviews some of our experiences with the Classifynder system and our exploration of alternative classifier models to enhance both accuracy and interpretability. Our experiments in the pollen analysis problem domain have been based on samples from the Australian National University's pollen reference collection (2,890 grains, 15 species) and images bundled with the Classifynder system (400 grains, 4 species). These samples have been represented using the Classifynder image feature set. We additionally work through a real-world case study in which we assess the ability of the system to determine the pollen make-up of samples of New Zealand honey. In addition to the Classifynder's native neural network classifier, we have evaluated linear discriminant, support vector machine, decision tree and random forest classifiers on these data, with encouraging results. Our hope is that our findings will help enhance the performance of future releases of the Classifynder and other systems for accelerating the acquisition and analysis of pollen samples.
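The abstract above compares several classifiers (linear discriminant, SVM, decision tree, random forest) trained on labelled pollen feature vectors. As a minimal stand-in for that workflow, not the paper's actual method, the sketch below classifies hypothetical two-dimensional feature vectors with a nearest-centroid baseline; the feature values and species names are invented for illustration.

```python
import math

# Hypothetical toy data: each pollen grain is a feature vector (standing in
# for the Classifynder image feature set) paired with a species label.
train = [
    ((0.9, 0.1), "species_a"), ((1.0, 0.2), "species_a"),
    ((0.2, 0.8), "species_b"), ((0.1, 0.9), "species_b"),
]

def centroids(samples):
    """Mean feature vector per class."""
    sums, counts = {}, {}
    for vec, label in samples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
    return {lbl: tuple(x / counts[lbl] for x in acc) for lbl, acc in sums.items()}

def classify(vec, cents):
    """Assign vec to the class with the nearest centroid (Euclidean)."""
    return min(cents, key=lambda lbl: math.dist(vec, cents[lbl]))

cents = centroids(train)
print(classify((0.85, 0.15), cents))  # species_a
```

The same train/predict structure applies whichever of the evaluated classifier families is substituted for the centroid rule.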



Clustering is an important technique for organising and categorising web-scale document collections. The main challenges in clustering the billions of documents available on the web are the processing power required and the sheer size of the datasets. More importantly, it is practically impossible to generate ground-truth labels for a general web collection containing billions of documents and a vast taxonomy of topics, yet document clusters are most commonly evaluated by comparison against such a set of labels. This paper presents a clustering and labelling solution in which Wikipedia is clustered and hundreds of millions of web documents in ClueWeb12 are mapped onto those clusters. The solution rests on the assumption that Wikipedia covers such a wide range of diverse topics that it represents a small-scale web. We found that the web-scale document clustering and labelling process could be performed on one desktop computer in under two days for a Wikipedia clustering solution containing about 1,000 clusters; solutions with finer-grained clusters, such as 10,000 or 50,000, take longer to execute. These results were evaluated using a set of external data.
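The two-stage idea described above (cluster a labelled reference corpus, then map arbitrary web documents onto those clusters) can be sketched in a few lines. The 2-D vectors below are hypothetical stand-ins for real document representations, and the tiny k-means loop is only an illustration of the mechanism, not the paper's implementation.

```python
import math

# Hypothetical "Wikipedia" corpus: one 2-D vector per document.
wiki_docs = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.8), (0.95, 0.85)]

def kmeans(points, centres, rounds=10):
    """Plain Lloyd's algorithm with fixed initial centres."""
    for _ in range(rounds):
        groups = {i: [] for i in range(len(centres))}
        for p in points:
            i = min(range(len(centres)), key=lambda j: math.dist(p, centres[j]))
            groups[i].append(p)
        centres = [
            tuple(sum(c) / len(g) for c in zip(*g)) if g else centres[i]
            for i, g in groups.items()
        ]
    return centres

def assign(doc, centres):
    """Map an arbitrary (web) document vector to its nearest cluster."""
    return min(range(len(centres)), key=lambda j: math.dist(doc, centres[j]))

centres = kmeans(wiki_docs, centres=[(0.0, 0.0), (1.0, 1.0)])
print(assign((0.2, 0.1), centres))  # cluster 0
```

The second stage is embarrassingly parallel: each ClueWeb12 document only needs a nearest-centroid lookup against the fixed set of cluster centres.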


Infectious diseases such as SARS, influenza and bird flu have the potential to cause global pandemics, and a key intervention will be vaccination. Hence, it is imperative to have in place the capacity to create vaccines against new diseases in the shortest time possible. In 2004, the Institute of Medicine asserted that the world is tottering on the verge of a colossal influenza outbreak, and stated that an inadequate production system for influenza vaccines is a major obstacle to preparedness for influenza outbreaks. Because of production issues, the vaccine industry is facing financial and technological bottlenecks: in October 2004, the FDA was caught off guard by a shortage of flu vaccine caused by contamination at a US-based plant (Chiron Corporation), one of only two suppliers of US flu vaccine. Due to difficulties in production and long processing times, the bulk of the world's vaccine production comes from a very small number of companies compared with the number of companies producing drugs. Conventional vaccines are made of attenuated or modified forms of viruses. Relatively high and continuous doses are administered when a non-viable vaccine is used, and the overall protective immunity obtained is ephemeral. The safety concerns of viral vaccines have propelled interest in creating a viable replacement that would be more effective and safer to use.


Successful biodiversity conservation requires safeguarding viable populations of species. To address this challenge, Sweden has introduced a concept of Action Plans, which focus on the recovery of one or more species while keeping in mind the philosophy of addressing ecosystems in a more comprehensive way, following the umbrella concept. In this paper we investigate the implementation process of the Action Plan for one umbrella species, the White-backed Woodpecker (WBW) Dendrocopos leucotos. We describe the plan's organisation and goals, and investigate its implementation and accomplishment of particular targets, based on interviewing and surveying the key actors. The achievement of the targets in 2005-2008 was on average much lower than planned, explained partially by the lack of knowledge/data, experienced workers, and administrative flexibility. Surprisingly, the perceived importance of particular conservation measures, the investment priority accorded to them, the money available and various practical obstacles all failed to explain the target levels achieved. However, qualitative data from both the interviews and the survey highlight possible implementation obstacles: competing interests with other conservation actions and the level of engagement of particular implementing actors. Therefore we suggest that for successful implementation of recovery plans there is a need for initial and inclusive scoping prior to embarking on the plan, where not only issues like ecological knowledge and practical resources are considered, but also possible conflicts and synergies with other conservation actions. An adaptive approach with regular review of the conservation process is essential, particularly in the case of such complex action plans as the one for the WBW.


There is often a gap between teaching beliefs and actual practice, between ‘what is valued and what is taught’ (Jones, 2009, p. 175). This may be particularly true when it comes to teaching creatively and teaching for creativity in higher education. This lack of congruence is not necessarily due to a lack of awareness about what is possible, or the desire to enact change in this domain. It may, however, be due to a mix of less easily manipulated contextual factors (environmental, socio-cultural, political and economic), and a lack of discourse (Jackson, 2006) around the problem...


This chapter discusses the methodological aspects and empirical findings of a large-scale, funded project investigating public communication through social media in Australia. The project concentrates on Twitter, but we approach it as representative of broader current trends toward the integration of large datasets and computational methods into media and communication studies in general, and social media scholarship in particular. The research discussed in this chapter aims to empirically describe networks of affiliation and interest in the Australian Twittersphere, while reflecting on the methodological implications and imperatives of ‘big data’ in the humanities. Using custom network crawling technology, we have conducted a snowball crawl of Twitter accounts operated by Australian users to identify more than one million users and their follower/followee relationships, and have mapped their interconnections. In itself, the map provides an overview of the major clusters of densely interlinked users, largely centred on shared topics of interest (from politics through arts to sport) and/or sociodemographic factors (geographic origins, age groups). Our map of the Twittersphere is the first of its kind for the Australian part of the global Twitter network, and also provides a first independent and scholarly estimation of the size of the total Australian Twitter population. In combination with our investigation of participation patterns in specific thematic hashtags, the map also enables us to examine which areas of the underlying follower/followee network are activated in the discussion of specific current topics – allowing new insights into the extent to which particular topics and issues are of interest to specialised niches or to the Australian public more broadly. 
Specifically, we examine the Twittersphere footprint of dedicated political discussion, under the #auspol hashtag, and compare it with the heightened, broader interest in Australian politics during election campaigns, using #ausvotes; we explore the different patterns of Twitter activity across the map for major television events (the popular competitive cooking show #masterchef, the British #royalwedding, and the annual #stateoforigin Rugby League sporting contest); and we investigate the circulation of links to the articles published by a number of major Australian news organisations across the network. Such analysis, which combines the 'big data'-informed map and a close reading of individual communicative phenomena, makes it possible to trace the dynamic formation and dissolution of issue publics against the backdrop of longer-term network connections, and the circulation of information across these follower/followee links. Such research sheds light on the communicative dynamics of Twitter as a space for mediated social interaction. Our work demonstrates the possibilities inherent in the current 'computational turn' (Berry, 2010) in the digital humanities, as well as adding to the development and critical examination of methodologies for dealing with 'big data' (boyd and Crawford, 2011). Our tools and methods for doing Twitter research, released under Creative Commons licences through our project website, provide the basis for replicable and verifiable digital humanities research on the processes of public communication which take place through this important new social network.
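One core step in the mapping described above is grouping accounts into densely interlinked clusters from raw follower/followee relationships. As a minimal sketch under simplifying assumptions (hypothetical account names, and reciprocal follows taken as the tie of interest rather than the project's actual clustering method), one can keep only mutual ties and take connected components:

```python
# Hypothetical follower -> followee relationships.
follows = {
    "pol_a": {"pol_b", "sport_x"}, "pol_b": {"pol_a"},
    "sport_x": {"sport_y"}, "sport_y": {"sport_x"},
}

def mutual_edges(follows):
    """Pairs of accounts that follow each other."""
    return {frozenset((a, b)) for a, out in follows.items()
            for b in out if a in follows.get(b, set())}

def clusters(follows):
    """Connected components over reciprocal follower/followee ties."""
    neigh = {}
    for edge in mutual_edges(follows):
        a, b = tuple(edge)
        neigh.setdefault(a, set()).add(b)
        neigh.setdefault(b, set()).add(a)
    seen, comps = set(), []
    for node in neigh:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n not in comp:
                comp.add(n)
                stack.extend(neigh[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

print(clusters(follows))  # two clusters: politics pair, sport pair
```

On a million-account crawl the same idea applies, though interest clusters are usually extracted with community-detection algorithms rather than plain components.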


Acid hydrolysis is a popular pretreatment for removing hemicellulose from lignocelluloses in order to produce a digestible substrate for enzymatic saccharification. In this work, a novel model for the dilute acid hydrolysis of hemicellulose within sugarcane bagasse is presented and calibrated against experimental oligomer profiles. The efficacy of mathematical models as hydrolysis yield predictors and as vehicles for investigating the mechanisms of acid hydrolysis is also examined. Experimental xylose, oligomer (degree of polymerisation 2 to 6) and furfural yield profiles were obtained for bagasse under dilute acid hydrolysis conditions at temperatures ranging from 110°C to 170°C. Population balance kinetics, diffusion and porosity evolution were incorporated into a mathematical model of the acid hydrolysis of sugarcane bagasse. This model was able to produce a good fit to experimental xylose yield data with only three unknown kinetic parameters, ka, kb and kd. However, fitting this same model to an expanded data set of oligomeric and furfural yield profiles did not successfully reproduce the experimental results. It was found that a "hard-to-hydrolyse" parameter, α, was required in the model to ensure reproducibility of the experimental oligomer profiles at 110°C, 125°C and 140°C. The parameters obtained through the fitting exercises at lower temperatures were able to be used to predict the oligomer profiles at 155°C and 170°C, with promising results. The interpretation of kinetic parameters obtained by fitting a model to only a single set of data may be ambiguous: although these parameters may correctly reproduce the data, they may not be indicative of the actual rate parameters unless some care has been taken to ensure that the model describes the true mechanisms of acid hydrolysis. It is possible to challenge the robustness of the model by expanding the experimental data set and hence limiting the parameter space for the fitting parameters. The novel combination of "hard-to-hydrolyse" and population balance dynamics in the model presented here appears to stand up to such rigorous fitting constraints.
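The role of the hard-to-hydrolyse parameter α can be illustrated with a deliberately simplified kinetic sketch (not the paper's population-balance model): a fraction α of the xylan reacts at a slow rate kb, the remainder at ka, and released xylose degrades to furfural at kd. All rate values below are illustrative, not fitted values from the study.

```python
def hydrolysis(alpha, ka, kb, kd, t_end=10.0, dt=0.001):
    """Forward-Euler integration of a two-pool hydrolysis model.

    easy/hard: unreacted xylan fractions; xylose degrades to furfural.
    """
    easy, hard, xylose, furfural = 1.0 - alpha, alpha, 0.0, 0.0
    t = 0.0
    while t < t_end:
        d_easy = -ka * easy
        d_hard = -kb * hard
        d_xyl = ka * easy + kb * hard - kd * xylose
        furfural += kd * xylose * dt
        easy += d_easy * dt
        hard += d_hard * dt
        xylose += d_xyl * dt
        t += dt
    return xylose, furfural

x1, _ = hydrolysis(alpha=0.0, ka=1.0, kb=0.05, kd=0.1)
x2, _ = hydrolysis(alpha=0.4, ka=1.0, kb=0.05, kd=0.1)
# With a hard-to-hydrolyse fraction present, less xylose is released by t_end.
print(x1 > x2)
```

This captures the qualitative effect the fitting exercise exploited: a nonzero α slows oligomer and xylose release without changing the fast-pool kinetics.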


Using cameras onboard a robot to detect a coloured stationary target outdoors is a difficult task. Apart from the complexity of separating the target from the background scenery over different ranges, there are also inconsistencies in direct and reflected illumination from the sun, clouds, and moving and stationary objects, which can vary both the illumination on the target and its colour as perceived by the camera. In this paper, we analyse the effect of environmental conditions, range to target, camera settings and image processing on the reported colours of various targets. The analysis indicates the colour space and camera configuration that provide the most consistent colour values over varying environmental conditions and ranges. This information is used to develop a detection system that provides range and bearing to detected targets. The system is evaluated over lighting conditions ranging from bright sunlight to shadow and overcast days, and demonstrates robust performance. The accuracy of the system is compared against a laser beacon detector, with preliminary results indicating it to be a valuable asset for long-range coloured target detection.
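The motivation for a careful colour-space choice can be sketched in a few lines: a hue-based representation separates chromaticity from brightness, so the same target under sun and shadow maps to a similar hue even though its RGB values differ greatly. The pixel values and threshold below are hypothetical, not the paper's calibrated configuration.

```python
import colorsys

def is_red_target(r, g, b, hue_tol=0.05):
    """Classify a pixel by hue, largely ignoring brightness changes."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    # red sits near hue 0.0, wrapping around 1.0; require some saturation
    return min(h, 1 - h) < hue_tol and s > 0.4

sunlit = (220, 40, 35)   # hypothetical target pixel in direct sunlight
shaded = (90, 18, 15)    # same target in shadow: darker RGB, similar hue
print(is_red_target(*sunlit), is_red_target(*shaded))  # True True
```

A raw RGB threshold tuned on the sunlit pixel would reject the shaded one; the hue test accepts both while still rejecting differently coloured scenery.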


The mining industry is highly suitable for the application of robotics and automation technology, since the work is both arduous and dangerous. However, while the industry makes extensive use of mechanisation, it has shown a slow uptake of automation. A major cause of this is the complexity of the task and the limitations of existing automation technology, which is predicated on a structured and time-invariant working environment. Here we discuss the topic of mining automation from a robotics and computer vision perspective, as a problem in sensor-based robot control, an issue which the robotics community has been studying for nearly two decades. We then describe two of our current mining automation projects to demonstrate what is possible for both open-pit and underground mining operations.


The consequences of falls are often dreadful for individuals with lower limb amputation using bone-anchored prostheses.[1-5] Typically, the impact on the fixation is responsible for bending the intercutaneous piece, which can lead to a complete breakage over time.[3, 5-8] The surgical replacement of this piece is possible but complex and expensive. Clearly, there is a need for solid data enabling an evidence-based design of protective devices limiting impact forces and torsion applied during a fall. The impact on the fixation during an actual fall is obviously difficult to record during a scientific experiment.[6, 8-13] Consequently, Schwartze and colleagues opted for one of the next best options science has to offer: simulation with an able-bodied participant. They recorded body movements and knee impacts on the floor while mimicking several plausible falling scenarios. Then, they calculated the forces and moments that would be applied at four levels along the femur corresponding to amputation heights.[6, 8-11, 14-25] The overall forces applied during the falls were similar regardless of the amputation height, indicating that the impact forces were simply translated along the femur. As expected, they showed that overall moments generally increased with amputation height due to changes in lever arm. This work demonstrates that devices protecting only against force overload do not need to consider amputation height, while those protecting against bending moments should. Another significant contribution is to provide, for the first time, the magnitude of the impact load during different falls. This loading range is crucial to the overall design and, more precisely, the triggering threshold of protective devices. Unfortunately, the analysis of only a single able-bodied participant replicating falls greatly limits the generalisation of the findings. Nonetheless, this case study is an important milestone contributing to a better understanding of load impact during a fall. This new knowledge will improve treatment, safe ambulation and, ultimately, the quality of life of individuals fitted with bone-anchored prostheses.
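The lever-arm argument above reduces to the elementary relation M = F·d: for the same impact force at the knee, the bending moment at the fixation grows linearly with the distance between impact point and fixation. The numbers below are purely illustrative, not measurements from the study.

```python
def bending_moment(force_n, lever_arm_m):
    """Bending moment M = F * d, in newton-metres."""
    return force_n * lever_arm_m

impact = 1500.0  # hypothetical knee-impact force, N
for arm in (0.10, 0.20, 0.30):  # hypothetical impact-to-fixation distances, m
    print(f"lever arm {arm:.2f} m -> moment {bending_moment(impact, arm):.0f} N*m")
```

This is why a force-overload guard can use one threshold for all amputation heights, whereas a bending-moment guard cannot.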


When a new form is inserted in an existing townscape, its consonance within the urban fabric is dependent on the level of attention paid to the evaluation and management of its architectural elements. However, despite the established principles and methods of urban morphology that enable the systematic analysis of the built environment, a formula for ensuring that new development relates to its context so as to achieve congruent outcomes is still lacking. This paper proposes a new method of evaluating and measuring architectural elements within evolving urban forms, with particular emphasis on a three-dimensional study of buildings. In a case study, detailed mapping of both current and past forms provides the basis for evincing predominant characteristics that have changed over time. Using this method, it is possible to demonstrate objectively how the townscape has been affected through changes in its architectural configuration.


Background: Individuals who fear falling may restrict themselves from performing certain activities and may increase their risk of falling. Such fear, reflected in the form of falls efficacy, has been measured in only a small number of studies examining the effectiveness of exercise interventions in the elderly, perhaps because of the various types of exercise that can be performed. The effectiveness of exercise on falls efficacy is therefore relatively understudied, and there is a need to measure falls efficacy as an outcome variable when conducting exercise interventions in the elderly. Methods: A total of 43 elderly community-dwelling volunteers were recruited and randomly allocated to a conventional exercise intervention, a holistic exercise intervention, or a control group. The interventions were performed 2 days per week for 10 weeks. Falls efficacy was measured at baseline and at the completion of the interventions using the Modified Falls Efficacy Scale (MFES). Results: Within-group comparisons between baseline and follow-up indicated no significant improvements in falls efficacy; however, the difference for the conventional exercise group approached statistical significance (baseline 8.9 to follow-up 9.3; P = 0.058). Between-group comparisons of mean MFES change scores showed a significant difference between the conventional exercise group and the control group (conventional exercise group 0.4 vs control group −0.6; P < 0.05). Conclusion: Given the lack of significant improvements in falls efficacy found for any of the groups, it cannot be concluded whether a conventional or a holistic exercise intervention is the best approach for improving falls efficacy. It is possible that the characteristics of the exercise interventions, including specificity, intensity, frequency and duration, need to be manipulated if the purpose is to bring about improvements in falls efficacy.
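The outcome measure reported above is each group's mean change in MFES score between baseline and follow-up. The sketch below computes that quantity for hypothetical scores chosen only to echo the reported direction of effect (exercise group improves, control group declines); they are not the study's data.

```python
def mean_change(before, after):
    """Mean per-participant change: follow-up minus baseline."""
    diffs = [a - b for b, a in zip(before, after)]
    return sum(diffs) / len(diffs)

# Hypothetical per-participant MFES scores (scale values are illustrative).
exercise_before = [8.7, 9.0, 8.9, 9.1]
exercise_after  = [9.2, 9.3, 9.4, 9.3]
control_before  = [9.0, 8.8, 9.2, 9.1]
control_after   = [8.5, 8.2, 8.4, 8.6]

print(mean_change(exercise_before, exercise_after))  # positive: improved
print(mean_change(control_before, control_after))    # negative: declined
```

In the study itself these change scores were then compared between groups with a significance test, which is where the reported P < 0.05 arises.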


In 2012 the Australian Commonwealth government was scheduled to release the first dedicated policy for culture and the arts since the Keating government's Creative Nation (1994). Investing in a Creative Australia was to appear after a lengthy period of consultation between the Commonwealth government and all interested cultural sectors and organisations. When it eventuates, the policy will be of particular interest to those information professionals working in the GLAM (galleries, libraries, archives and museums) environment. GLAM is a cross-institutional field which seeks to find points of commonality among various cultural-heritage institutions, while still recognising their points of difference. Digitisation, collaboration and convergence are key themes and characteristics of the GLAM sector and its associated theoretical discipline. The GLAM movement has seen many institutions seeking to work together to create networks of practice that are beneficial to the cultural-heritage industry and sector. With a new Australian cultural policy imminent, it is timely to reflect on the issues and challenges that GLAM principles present to national cultural-heritage institutions by discussing their current practices. In doing so, it is possible to suggest productive ways forward for these institutions which could then be supported at a policy level by the Commonwealth government. Specifically, this paper examines four institutions: the National Gallery of Australia, the National Library of Australia, the National Archives of Australia and the National Museum of Australia. The paper reflects on their responses to the Commonwealth's 2011 Cultural Policy Discussion Paper. It argues that by encouraging and supporting collecting institutions to participate more fully in GLAM practices the Commonwealth government's cultural policy would enable far greater public access to, and participation in, Australia's cultural heritage. 
Furthermore, by considering these four institutions, the paper presents a discussion of the challenges and the opportunities that GLAM theoretical and disciplinary principles present to the cultural-heritage sector.

Implications for Best Practice:
* GLAM is a developing field of theory and practice that encompasses many issues and challenges for practitioners in this area.
* GLAM principles and practices are increasingly influencing the cultural-heritage sector.
* Cultural policy is a key element in shaping the future of Australia's cultural-heritage sector and needs to incorporate GLAM principles.


The laz gene of Neisseria meningitidis is predicted to encode a lipid-modified azurin (Laz). Laz is very similar to azurin, a periplasmic protein belonging to the copper-containing cupredoxin superfamily. In other bacteria, azurin is an electron donor to nitrite reductase, an important enzyme in the denitrifying process. It is not known whether Laz could function as an electron transfer protein in this important pathogen. Laz protein was heterologously expressed in Escherichia coli and purified. Electrospray mass spectrometry indicated that the Laz protein contains one copper ion. Laz was shown to be redox-active in the presence of its redox-centre copper ion. When oxidised, Laz exhibits an intense blue colour and absorbs visible light around 626 nm; this absorption is lost on exposure to diethyldithiocarbamate, a copper-chelating agent. Polyclonal antibodies were raised against purified Laz to detect the expression of Laz under different growth conditions and to determine the orientation of Laz on the outer membrane. The expression of Laz under microaerobic and microaerobic denitrifying conditions was slightly higher than under aerobic conditions. However, the expression of Laz was similar between the wild-type strain and an fnr mutant, suggesting that the fumarate/nitrate reduction regulator (FNR) does not regulate the expression of Laz despite the presence of a partial FNR box upstream of the laz gene. We propose that some Laz protein is exposed on the outer membrane surface of N. meningitidis, as the αLaz antibodies can increase killing by complement in a capsule-deficient N. meningitidis strain in a dose-dependent fashion.
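The 626 nm absorption band described above is routinely used to quantify the copper site via the Beer-Lambert law, A = εcl. The sketch below uses an assumed, purely illustrative extinction coefficient; it is not a measured value for Laz from this work.

```python
def concentration_molar(absorbance, epsilon, path_cm=1.0):
    """Beer-Lambert law rearranged: c = A / (epsilon * l)."""
    return absorbance / (epsilon * path_cm)

eps_626 = 5000.0  # assumed molar extinction coefficient, M^-1 cm^-1
print(concentration_molar(0.25, eps_626))  # 5e-05 (molar)
```

Loss of the 626 nm band after diethyldithiocarbamate treatment would drive this estimate to zero, consistent with removal of the copper ion.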