896 results for "Visualisation spatio-temporelle"
Abstract:
Much of our understanding and management of ecological processes requires knowledge of the distribution and abundance of species. Reliable abundance or density estimates are essential for managing both threatened and invasive populations, yet are often challenging to obtain. Recent and emerging technological advances, particularly in unmanned aerial vehicles (UAVs), provide exciting opportunities to overcome these challenges in ecological surveillance. UAVs can provide automated, cost-effective surveillance and offer repeat surveys for pest incursions at an invasion front. They can capitalise on manoeuvrability and advanced imagery options to detect species that are cryptic due to behaviour, life-history or inaccessible habitat. UAVs may also cause less disturbance, in magnitude and duration, for sensitive fauna than other survey methods such as transect counting by humans or sniffer dogs. The surveillance approach depends upon the particular ecological context and the objective. For example, animal, plant and microbial target species differ in their movement, spread and observability. Lag-times may exist between a pest species presence at a site and its detectability, prompting a need for repeat surveys. Operationally, however, the frequency and coverage of UAV surveys may be limited by financial and other constraints, leading to errors in estimating species occurrence or density. We use simulation modelling to investigate how movement ecology should influence fine-scale decisions regarding ecological surveillance using UAVs. Movement and dispersal parameter choices allow contrasts between locally mobile but slow-dispersing populations, and species that are locally more static but invasive at the landscape scale. 
We find that low and slow UAV flights may offer the best monitoring strategy to predict local population densities in transects, but that the consequent reduction in overall area sampled may sacrifice the ability to reliably predict regional population density. Alternative flight plans may perform better, but this is also dependent on movement ecology and the magnitude of relative detection errors for different flight choices. Simulated investigations such as this will become increasingly useful to reveal how spatio-temporal extent and resolution of UAV monitoring should be adjusted to reduce observation errors and thus provide better population estimates, maximising the efficacy and efficiency of unmanned aerial surveys.
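The trade-off described above (a narrow, well-sampled strip versus broad but error-prone coverage) can be illustrated with a toy strip-transect simulation. This is a minimal sketch, not the authors' model: the population size, strip width and detection probabilities are arbitrary illustrative values, and animal movement is omitted.

```python
import random

def simulate_survey(n_animals=200, area=100.0, strip_width=5.0,
                    detect_prob=0.9, seed=0):
    """Toy strip-transect survey in a square of side `area`.

    Animals are placed uniformly at random; a UAV surveys one strip of
    the given width and detects each animal inside it with probability
    `detect_prob`. Returns (true_density, estimated_density).
    """
    rng = random.Random(seed)
    xs = [rng.uniform(0, area) for _ in range(n_animals)]
    true_density = n_animals / area ** 2
    in_strip = [x for x in xs if x < strip_width]
    detected = sum(1 for _ in in_strip if rng.random() < detect_prob)
    # Correct the raw count for imperfect detection and surveyed area.
    est_density = detected / detect_prob / (strip_width * area)
    return true_density, est_density

# A 'low and slow' flight: narrow strip, high detection probability.
true_d, est_d = simulate_survey(strip_width=5.0, detect_prob=0.9)
# A 'high and fast' flight: wider strip, lower detection probability.
true_d2, est_d2 = simulate_survey(strip_width=20.0, detect_prob=0.5)
```

Repeating each flight design over many random seeds and comparing the spread of the resulting estimates against the true density is the kind of contrast the simulation study formalises, with movement and dispersal added.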
Abstract:
Self-tracking, the process of recording one's own behaviours, thoughts and feelings, is a popular approach to enhance one's self-knowledge. While dedicated self-tracking apps and devices support data collection, previous research highlights that the integration of data constitutes a barrier for users. In this study we investigated how members of the Quantified Self movement---early adopters of self-tracking tools---overcome these barriers. We conducted a qualitative analysis of 51 videos of Quantified Self presentations to explore intentions for collecting data, methods for integrating and representing data, and how intentions and methods shaped reflection. The findings highlight two different intentions---striving for self-improvement and curiosity in personal data---which shaped how these users integrated data, i.e. the effort required. Furthermore, we identified three methods for representing data---binary, structured and abstract---which influenced reflection. Binary representations supported reflection-in-action, whereas structured and abstract representations supported iterative processes of data collection, integration and reflection. For people tracking out of curiosity, this iterative engagement with personal data often became an end in itself, rather than a means to achieve a goal. We discuss how these findings contribute to our current understanding of self-tracking amongst Quantified Self members and beyond, and we conclude with directions for future work to support self-trackers with their aspirations.
Abstract:
BACKGROUND Chikungunya and dengue infections are spatio-temporally related. The current review aims to determine the geographic limits of chikungunya, dengue and the principal mosquito vectors for both viruses and to synthesise current epidemiological understanding of their co-distribution. METHODS Three biomedical databases (PubMed, Scopus and Web of Science) were searched from their inception until May 2015 for studies that reported concurrent detection of chikungunya and dengue viruses in the same patient. Additionally, data from WHO, CDC and Healthmap alerts were extracted to create up-to-date global distribution maps for both dengue and chikungunya. RESULTS Evidence for chikungunya-dengue co-infection has been found in Angola, Gabon, India, Madagascar, Malaysia, Myanmar, Nigeria, Saint Martin, Singapore, Sri Lanka, Tanzania, Thailand and Yemen; these constitute only 13 out of the 98 countries/territories where both chikungunya and dengue epidemic/endemic transmission have been reported. CONCLUSIONS Understanding the true extent of chikungunya-dengue co-infection is hampered by current diagnosis largely based on their similar symptoms. Heightened awareness of chikungunya among the public and public health practitioners in the advent of the ongoing outbreak in the Americas can be expected to improve diagnostic rigour. Maps generated from the newly compiled lists of the geographic distribution of both pathogens and vectors represent the current geographical limits of chikungunya and dengue, as well as the countries/territories at risk of future incursion by both viruses. These describe regions of co-endemicity in which lab-based diagnosis of suspected cases is of higher priority.
Abstract:
This PhD research proposed new machine learning techniques to improve human action recognition based on local features. Several novel video representation and classification techniques were proposed to increase performance with lower computational complexity. The major contributions are new feature representation techniques built on advanced machine learning methods such as multiple-instance dictionary learning, Latent Dirichlet Allocation (LDA) and sparse coding. A binary-tree based classification technique was also proposed to deal with large numbers of action categories. These techniques not only improve classification accuracy under constrained computational resources but are also robust to challenging environmental conditions. The developed techniques can be easily extended to a wide range of video applications to provide near real-time performance.
Abstract:
An analysis of inviscid incompressible flow in a tube of sinusoidally perturbed circular cross section with wall injection has been made. The velocity and pressure fields have been obtained. Measurements of axial velocity profiles and pressure distribution have been made in a simulated star shaped tube with wall injection. The static pressure at the star recess is found to be more than that at the star point, this feature being in conformity with the analytical result. Flow visualisation by photography of injected smoke seems to show simple diffusion rather than strong vortices in the recess.
Abstract:
Reductionist thinking will no longer suffice to address contemporary, complex challenges that defy sectoral, national, or disciplinary boundaries. Furthermore, lessons learned from the past cannot be confidently used to predict outcomes or help guide future actions. The authors propose that the confluence of a number of technology and social disruptors presents a pivotal moment in history to enable real-time, accelerated and integrated action that can adequately support a ‘future earth’ through transformational solutions. Building on more than a decade of dialogues hosted by the International Society for Digital Earth (ISDE), and evolving a briefing note presented to delegates of Pivotal2015, the paper presents an emergent context for collectively addressing spatial information, sustainable development and good governance through three guiding principles for enabling prosperous living in the 21st Century. These are: (1) open data, (2) real world context and (3) informed visualization for decision support. The paper synthesizes an interdisciplinary dialogue to create a credible and positive future vision of collaborative and transparent action for the betterment of humanity and planet. It is intended that the three Pivotal Principles can be used as an elegant framework for action towards the Digital Earth vision, across local, regional, and international communities and organizations.
Abstract:
K-Cl cotransporter 2 (KCC2) maintains the low intracellular Cl⁻ concentration required for fast hyperpolarizing responses of neurons to the classical inhibitory neurotransmitters γ-aminobutyric acid (GABA) and glycine. Decreased Cl⁻ extrusion observed in genetically modified KCC2-deficient mice leads to depolarizing GABA responses, impaired brain inhibition and, as a consequence, to epileptic seizures. Identification of mechanisms regulating activity of the SLC12A5 gene, which encodes the KCC2 cotransporter, in normal and pathological conditions is thus of extreme importance. Multiple reports have previously elucidated in detail the spatio-temporal pattern of KCC2 expression. Among the characteristic features are an exclusive neuronal specificity, a dramatic upregulation during embryonic and early postnatal development, and a significant downregulation by neuronal trauma. Numerous studies have confirmed these expressional features; however, the transcriptional mechanisms predetermining the SLC12A5 gene's behaviour are still unknown. The aim of the presented thesis is to recognize such transcriptional mechanisms and, on their basis, to create a transcriptional model that would explain the established SLC12A5 gene behaviour. Until recently, only one KCC2 transcript was thought to exist. A particular novelty of the presented work is the identification of two SLC12A5 gene promoters (SLC12A5-1a and SLC12A5-1b) that produce at least two KCC2 isoforms (KCC2a and KCC2b) differing in their N-terminal parts. Even though a functional 86Rb+ assay reveals no significant difference between the transport activities of the isoforms, consensus sites for several protein kinases, found in KCC2a but not in KCC2b, imply a distinct kinetic regulation. As a logical continuation, the current work presents a detailed analysis of the KCC2a and KCC2b expression patterns.
This analysis shows an exclusively neuron-specific pattern and similar expression levels for both isoforms during embryonic and neonatal development in rodents. During subsequent postnatal development, KCC2b expression dramatically increases, while KCC2a expression, depending on the central nervous system (CNS) area, either remains at the same level or moderately decreases. In an attempt to explain both the neuronal specificity and the distinct expressional kinetics of the KCC2a and KCC2b isoforms during postnatal development, the corresponding SLC12A5-1a and SLC12A5-1b promoters have been subjected to a comprehensive bioinformatic analysis. Binding sites of several transcription factors (TFs), conserved in the mammalian SLC12A5 gene orthologs, have been identified that might shed light on the observed behaviour of the SLC12A5 gene. Possible roles of these TFs in the regulation of SLC12A5 gene expression have been elucidated in subsequent experiments and are discussed in the current thesis.
Abstract:
In mammals, including humans, failure in blastocyst hatching and implantation leads to early embryonic loss and infertility. Prior to implantation, the blastocyst must hatch out of its acellular glycoprotein coat, the zona pellucida (ZP). The phenomenon of blastocyst hatching is believed to be regulated by (i) dynamic cellular components such as actin-based trophectodermal projections (TEPs), and (ii) a variety of autocrine and paracrine molecules such as growth factors, cytokines and proteases. The spatio-temporal regulation of zona lysis by blastocyst-derived cellular and molecular signaling factors is being keenly investigated. Our studies show that hamster blastocyst hatching is accelerated by growth factors such as heparin-binding epidermal growth factor and leukemia inhibitory factor, and that embryo-derived cysteine proteases, including cathepsins, are responsible for blastocyst hatching. Additionally, we believe that cyclooxygenase-generated prostaglandins, estradiol-17β-mediated estrogen receptor-α signaling and possibly NF-κB could be involved in peri-hatching development. Moreover, we show that TEPs are intimately involved in lysing the ZP and that the TEPs potentially enrich and harbor hatching-enabling factors. These observations provide new insights into our understanding of the key cellular and molecular regulators involved in the phenomenon of mammalian blastocyst hatching, which is essential for the establishment of early pregnancy.
Abstract:
Predicting temporal responses of ecosystems to disturbances associated with industrial activities is critical for their management and conservation. However, prediction of ecosystem responses is challenging due to the complexity and potential non-linearities stemming from interactions between system components and multiple environmental drivers. Prediction is particularly difficult for marine ecosystems due to their often highly variable and complex natures and the large uncertainties surrounding their dynamic responses. Consequently, current management of such systems often relies on expert judgement and/or complex quantitative models that consider only a subset of the relevant ecological processes. Hence there exists an urgent need for the development of whole-of-system predictive models to support decision and policy makers in managing complex marine systems in the context of industry-based disturbances. This paper presents Dynamic Bayesian Networks (DBNs) for predicting the temporal response of a marine ecosystem to anthropogenic disturbances. The DBN provides a visual representation of the problem domain in terms of factors (parts of the ecosystem) and their relationships. These relationships are quantified via Conditional Probability Tables (CPTs), which estimate the variability and uncertainty in the distribution of each factor. The combination of qualitative visual and quantitative elements in a DBN facilitates the integration of a wide array of data, published and expert knowledge, and other models. Such multiple sources are often essential, as a single source of information is rarely sufficient to cover the diverse range of factors relevant to a management task. Here, a DBN model is developed for tropical, annual Halophila and temperate, persistent Amphibolis seagrass meadows to inform dredging management and help meet environmental guidelines. Specifically, the impacts of capital (e.g.
maintaining channel depths in established ports) dredging are evaluated with respect to the risk of permanent loss, defined as no recovery within 5 years (Environmental Protection Agency guidelines). The model is developed using expert knowledge, existing literature, statistical models of environmental light, and experimental data. The model is then demonstrated in a case study through the analysis of a variety of dredging, environmental and seagrass ecosystem recovery scenarios. In spatial zones significantly affected by dredging, such as the zone of moderate impact, shoot density has a very high probability of being driven to zero by capital dredging due to the duration of such dredging. Here, fast-growing Halophila species can recover; however, the probability of recovery depends on the presence of seed banks. On the other hand, slow-growing Amphibolis meadows have a high probability of suffering permanent loss. In the maintenance dredging scenario, by contrast, the shorter duration of dredging means Amphibolis is better able to resist its impacts. For both types of seagrass meadows, the probability of loss was strongly dependent on the biological and ecological status of the meadow, as well as environmental conditions post-dredging. The ability to predict the ecosystem response under cumulative, non-linear interactions across a complex ecosystem highlights the utility of DBNs for decision support and environmental management.
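The mechanics of a DBN forecast, propagating a probability distribution over ecosystem states through per-time-slice CPTs, can be sketched in a few lines. The states, transition probabilities and scenario below are hypothetical illustrations, not values from the seagrass model.

```python
# Discrete states for seagrass shoot density (illustrative only).
STATES = ("zero", "low", "high")

# Hypothetical transition CPTs: P(next state | current state) for a
# year under dredging pressure and a year of recovery, respectively.
CPT_DREDGING = {
    "high": {"high": 0.3, "low": 0.6, "zero": 0.1},
    "low":  {"high": 0.0, "low": 0.4, "zero": 0.6},
    "zero": {"high": 0.0, "low": 0.0, "zero": 1.0},  # absorbing while dredging
}
CPT_RECOVERY = {
    "high": {"high": 0.9, "low": 0.1, "zero": 0.0},
    "low":  {"high": 0.5, "low": 0.4, "zero": 0.1},
    "zero": {"high": 0.0, "low": 0.2, "zero": 0.8},  # seed-bank recolonisation
}

def step(belief, cpt):
    """One time-slice update: push the belief over states through the CPT."""
    out = {s: 0.0 for s in STATES}
    for s, p in belief.items():
        for s2, q in cpt[s].items():
            out[s2] += p * q
    return out

def forecast(belief, schedule):
    """Apply a sequence of yearly CPTs (a dredging scenario)."""
    for cpt in schedule:
        belief = step(belief, cpt)
    return belief

# Capital-dredging style scenario: 2 years of dredging, 3 of recovery,
# starting from a healthy meadow.
belief = {"high": 1.0, "low": 0.0, "zero": 0.0}
final = forecast(belief, [CPT_DREDGING] * 2 + [CPT_RECOVERY] * 3)
```

The forecast stays a properly normalised distribution at every slice; the probability mass left in the "zero" state after the recovery years is the kind of permanent-loss risk the paper evaluates against the 5-year guideline.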
Abstract:
The influence of atmospheric aerosols on Earth's radiation budget and hence climate, though well recognized and extensively investigated in recent years, remains largely uncertain, mainly because of the large spatio-temporal heterogeneity and the lack of data with adequate resolution. To characterize this diversity, a major multi-platform field campaign, ICARB (Integrated Campaign for Aerosols, gases and Radiation Budget), was carried out during the pre-monsoon period of 2006 over the Indian landmass and surrounding oceans, the biggest such campaign ever conducted over this region. Based on the extensive and concurrent measurements of the optical and physical properties of atmospheric aerosols during ICARB, the spatial distribution of aerosol radiative forcing was estimated over the entire Bay of Bengal (BoB), northern Indian Ocean and Arabian Sea (AS), as well as the large spatial variations within these regions. Besides being considerably lower than the mean values reported earlier for this region, our studies have revealed large differences in the forcing components between the BoB and the AS. While the regionally averaged aerosol-induced atmospheric forcing efficiency was 31 ± 6 W m⁻² τ⁻¹ for the BoB, it was only about 18 ± 7 W m⁻² τ⁻¹ for the AS. Airborne measurements revealed the presence of strong, elevated aerosol layers even over the oceans, leading to vertical structures in the atmospheric forcing and resulting in significant warming in the lower troposphere. These observations suggest serious climate implications and raise issues ranging from the impact of aerosols on the vertical thermal structure of the atmosphere, and hence cloud formation processes, to monsoon circulation.
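The forcing efficiencies quoted above (per unit aerosol optical depth, τ) convert to an absolute atmospheric forcing by multiplying by the optical depth. A minimal worked sketch; the AOD value of 0.4 below is a hypothetical input chosen for illustration, not a result from the campaign.

```python
def atmospheric_forcing(efficiency_per_tau, aod):
    """Aerosol atmospheric forcing (W/m^2): forcing efficiency
    (W/m^2 per unit aerosol optical depth) times the optical depth."""
    return efficiency_per_tau * aod

# Regional mean efficiencies reported for the two basins, applied to
# the same illustrative AOD of 0.4:
forcing_bob = atmospheric_forcing(31.0, 0.4)  # Bay of Bengal
forcing_as = atmospheric_forcing(18.0, 0.4)   # Arabian Sea
```

At the same aerosol loading, the BoB efficiency implies roughly 12.4 W/m² of atmospheric forcing against about 7.2 W/m² for the AS, which is the basin contrast the campaign highlights.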
Abstract:
Since 2007, close collaboration between the Learning and Teaching Unit’s Academic Quality and Standards team and the Department of Reporting and Analysis’ Business Objects team has resulted in a generational approach to reporting through which QUT established a place of trust: one in which data owners are confident in how data are stored, secured, reported and shared. While the Department of Reporting and Analysis focused on the data warehouse, data security and the publication of reports, the Academic Quality and Standards team focused on applying learning analytics to answer academic research questions and improve student learning, addressing questions such as:
• Are all students who leave course ABC academically challenged?
• Do the students who leave course XYZ stay within the faculty, stay within the university, or leave altogether?
• When students withdraw from a unit, do they stay enrolled on a full or part load, or leave?
• If students enter through a particular pathway, what is their experience in comparison to other pathways?
• With five years of historic reporting, can a two-year predictive forecast provide any insight?
In answering these questions, the Academic Quality and Standards team developed prototype data visualisations through curriculum conversations with academic staff. Where these enquiries were applicable more broadly, the information was brought into the standardised reporting for the benefit of the whole institution. At QUT, an annual report to the executive committees allows all stakeholders to record the performance and outcomes of all courses as a snapshot in time, or to use the live report at any point during the year. This approach to learning analytics received the 2014 ATEM/Campus Review Best Practice Award in Tertiary Education Management (The Unipromo Award for Excellence in Information Technology Management).
Abstract:
Fluctuation of field emission in carbon nanotubes (CNTs) is not desirable in many applications, and the design of biomedical x-ray devices is one of them. In these applications, it is of great importance to have precise control of electron beams over multiple spatio-temporal scales. In this paper, a new design is proposed in order to optimize the field emission performance of CNT arrays. A diode configuration is used for analysis, where arrays of CNTs act as the cathode. The results indicate that the linear height distribution of CNTs, as proposed in this study, shows more stable performance than the conventionally used uniform distribution.
Abstract:
Building on the launch of an early prototype at Balance Unbalance 2013, we now offer a fully realised experience of the ‘Long Time, No See?’ site-specific walking/visualisation project for conference users to engage with on a do-it-yourself basis, either before, during or after the event. ‘Long Time, No See?’ is a new form of participatory, environmental futures project, designed for individuals and groups. It uses a smartphone app to guide processes of individual or group walking at any chosen location, encouraging walkers to think in radical new ways about how best to prepare for ‘stormy’ environmental futures ahead. As part of their personal journeys, participants contribute site-specific micro-narratives in the form of texts, images and sounds, captured via the app during the loosely ‘guided’ walk. These responses are then uploaded and synthesised into an ever-building audiovisual and generative artwork/‘map’ of future-thinking affinities, viewable online at long-time-no-see.org (in Chrome) and simultaneously as large-screen visualisations at QUT’s Cube Centre in Brisbane, Australia. The artwork therefore spans both participants’ mobile devices and laptops. If desired, outcomes can also be presented publicly in large-screen format at the conference. ‘Long Time, No See?’ has been developed over the past two years by a team of leading Australian artists, designers, urban/environmental planners and programmers.
Abstract:
In an estuary, mixing and dispersion resulting from turbulence and small-scale fluctuations have strong spatio-temporal variability that cannot be resolved in conventional hydrodynamic models, while some models employ parameterizations for large water bodies. This paper presents small-scale diffusivity estimates from high-resolution drifters sampled at 10 Hz for periods of about 4 hours to resolve turbulence and shear diffusivity within a tidal shallow estuary (depth < 3 m). Taylor's diffusion theorem forms the basis of a first-order estimate for the diffusivity scale. Diffusivity varied between 0.001 and 0.02 m²/s during the flood tide experiment. The diffusivity showed strong dependence (R² > 0.9) on the horizontal mean velocity within the channel. Enhanced diffusivity caused by shear dispersion, resulting from the interaction of large-scale flow with the boundary geometries, was observed. Turbulence within the shallow channel showed some similarities with boundary layer flow, including consistency with the −5/3 slope predicted by Kolmogorov's similarity hypothesis within the inertial subrange. The diffusivities scale locally with a 4/3 power law, following Okubo's scaling, and the length scale grows as the 3/2 power of the time scale. The diffusivity scaling herein suggests that the modelling of small-scale mixing within tidal shallow estuaries can be approached from classical turbulence scaling upon identifying pertinent parameters.
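Taylor's theorem gives the first-order estimator referred to above: diffusivity is half the growth rate of the displacement variance about the ensemble mean, K = (1/2) d⟨x′²⟩/dt. The sketch below applies that estimator to synthetic 10 Hz random-walk "drifters" with a known diffusivity; it illustrates the estimator only, not the paper's analysis pipeline, and the step size and track length are arbitrary.

```python
import random

def taylor_diffusivity(tracks, dt):
    """First-order Taylor estimate K = 0.5 * d<x'^2>/dt, where x' is
    the displacement about the ensemble mean at each time step."""
    n_steps = len(tracks[0])
    variances = []
    for t in range(n_steps):
        xs = [trk[t] for trk in tracks]
        mean = sum(xs) / len(xs)
        variances.append(sum((x - mean) ** 2 for x in xs) / len(xs))
    # Average slope of variance vs. time over the record, halved.
    return 0.5 * (variances[-1] - variances[0]) / ((n_steps - 1) * dt)

# Synthetic drifters: 1-D random walks sampled at 10 Hz (dt = 0.1 s)
# with fixed step size; the theoretical diffusivity is
# K = step^2 / (2 * dt) = 0.0125 m^2/s.
rng = random.Random(1)
dt, step = 0.1, 0.05
tracks = []
for _ in range(500):
    x, trk = 0.0, [0.0]
    for _ in range(400):
        x += rng.choice((-step, step))
        trk.append(x)
    tracks.append(trk)
K = taylor_diffusivity(tracks, dt)
```

With a large enough ensemble the estimate converges on the theoretical value; with a handful of real drifters, as in the field experiment, the same estimator carries substantial sampling uncertainty, which is why the paper treats it as a first-order scale estimate.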
Abstract:
This study offers a reconstruction and critical evaluation of globalization theory, a perspective that has been central for sociology and cultural studies in recent decades, from the viewpoint of media and communications. As the study shows, sociological and cultural globalization theorists rely heavily on arguments concerning media and communications, especially the so-called new information and communication technologies, in the construction of their frameworks. Together with deepening the understanding of globalization theory, the study gives new critical knowledge of the problematic consequences that follow from such strong investment in media and communications in contemporary theory. The book is divided into four parts. The first part presents the research problem, the approach and the theoretical contexts of the study. Following the introduction in Chapter 1, I identify the core elements of globalization theory in Chapter 2. At the heart of globalization theory is the claim that recent decades have witnessed massive changes in the spatio-temporal constitution of society, caused by new media and communications in particular, and that these changes necessitate the rethinking of the foundations of social theory as a whole. Chapter 3 introduces three paradigms of media research (the political economy of media, cultural studies and medium theory), the discussion of which will make it easier to understand the key issues and controversies that emerge in academic globalization theorists' treatment of media and communications. The next two parts offer a close reading of four theorists whose works I use as entry points into academic debates on globalization. I argue that we can make sense of mainstream positions on globalization by dividing them into two paradigms: on the one hand, media-technological explanations of globalization and, on the other, cultural globalization theory.
As examples of the former, I discuss the works of Manuel Castells (Chapter 4) and Scott Lash (Chapter 5). I maintain that their analyses of globalization processes are overtly media-centric and result in an unhistorical and uncritical understanding of social power in an era of capitalist globalization. A related evaluation of the second paradigm (cultural globalization theory), as exemplified by Arjun Appadurai and John Tomlinson, is presented in Chapter 6. I argue that, due to their rejection of the importance of nation states and the notion of cultural imperialism for cultural analysis, and their replacement with a framework of media-generated deterritorializations and flows, these theorists underplay the importance of the neoliberalization of cultures throughout the world. The fourth part (Chapter 7) presents a central research finding of this study, namely that the media-centrism of globalization theory can be understood in the context of the emergence of neoliberalism. I find it problematic that at a time when capitalist dynamics have been strengthened in social and cultural life, advocates of globalization theory have directed attention to media-technological changes and their sweeping socio-cultural consequences, instead of analyzing the powerful material forces that shape society and culture. I further argue that this shift serves not only analytical but also utopian functions, that is, a longing for a better world in times when such longing is otherwise considered impracticable.