82 results for spatial and amplitude tapering


Relevance:

100.00%

Publisher:

Abstract:

Dengue has been a major public health concern in Australia since it re-emerged in Queensland in 1992-1993. This study explored the spatio-temporal distribution and clustering of locally-acquired dengue cases in Queensland, Australia, and identified target areas for effective interventions. A computerised dataset of locally-acquired dengue cases in Queensland from 1993 to 2012 was obtained from Queensland Health. Descriptive spatial and temporal analyses were conducted using geographic information system tools and geostatistical techniques, and dengue hot spots were detected using the SaTScan method. Descriptive spatial analysis showed that a total of 2,398 locally-acquired dengue cases were recorded in the central and northern regions of tropical Queensland. A seasonal pattern was observed, with most cases occurring in autumn. The geographic areas affected by dengue varied over time. Tropical areas are potential high-risk areas for mosquito-borne diseases such as dengue. This study demonstrated that locally-acquired dengue cases have exhibited spatial and temporal variation over the past twenty years in tropical Queensland, Australia. There is clear evidence for the existence of statistically significant clusters of dengue, and these clusters varied over time. These findings enabled us to detect and target dengue clusters, suggesting that geospatial information can assist health authorities in planning dengue control activities and allow for better design and implementation of dengue management programs.
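The abstract above does not include analysis code, but the hot-spot detection it describes can be illustrated with a minimal sketch of the kind of Poisson scan statistic that SaTScan implements. Everything below is hypothetical: the coordinates, populations and case counts are synthetic, the scan considers circular windows only, and statistical significance (which SaTScan assesses by Monte Carlo replication) is omitted.

```python
import numpy as np

def poisson_llr(cases_in, pop_in, total_cases, total_pop):
    """Kulldorff-style Poisson log-likelihood ratio for a candidate window,
    scanning for elevated risk only."""
    expected_in = total_cases * pop_in / total_pop
    if expected_in == 0 or cases_in <= expected_in:
        return 0.0
    cases_out = total_cases - cases_in
    expected_out = total_cases - expected_in
    llr = cases_in * np.log(cases_in / expected_in)
    if cases_out > 0:
        llr += cases_out * np.log(cases_out / expected_out)
    return llr

def spatial_scan(coords, cases, pop, max_fraction=0.5):
    """Scan circular windows centred on each locality, growing each window by
    nearest neighbours until it covers max_fraction of the total population.
    Returns the window with the highest likelihood ratio (the primary cluster)."""
    total_cases, total_pop = cases.sum(), pop.sum()
    best_llr, best_window = 0.0, None
    for i in range(len(coords)):
        order = np.argsort(np.linalg.norm(coords - coords[i], axis=1))
        cum_cases = cum_pop = 0.0
        for k, j in enumerate(order):
            cum_cases += cases[j]
            cum_pop += pop[j]
            if cum_pop > max_fraction * total_pop:
                break
            llr = poisson_llr(cum_cases, cum_pop, total_cases, total_pop)
            if llr > best_llr:
                best_llr, best_window = llr, order[:k + 1]
    return best_llr, best_window

# Synthetic example: 50 localities with coordinates, populations and case counts.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(50, 2))
pop = rng.integers(500, 5000, size=50).astype(float)
cases = rng.poisson(pop * 0.001).astype(float)
cases[:5] += 10   # inject an artificial excess so a hot spot is detectable
llr, window = spatial_scan(coords, cases, pop)
print(f"Primary cluster: {len(window)} localities, log-likelihood ratio = {llr:.2f}")
```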

Relevance:

100.00%

Publisher:

Abstract:

There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes it impossible to assess trends or determine optimal sampling regimes. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized by the following four steps:
- (i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
- (ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates were not collected;
- (iii) establish a predictive model for the concentration data that incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
- (iv) obtain the sum of the products of the predicted flow and the predicted concentration over the regular time intervals as an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features of the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall), and cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model can also accommodate autocorrelation in the model errors that results from intensive sampling during floods. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method can also incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2 to 10 times, indicating severe bias. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when bias in sampling is taken into account.
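As a rough illustration of steps (iii) and (iv), the sketch below fits a simple log-log rating curve for concentration with two of the additional predictors mentioned in the abstract (a rising/falling-limb indicator and cumulative discounted flow) and then sums predicted flow times predicted concentration over regular intervals. It is a minimal sketch with synthetic data and ordinary least squares; the paper's generalized regression, first-flush predictor, autocorrelated errors and standard-error calculation are not reproduced.

```python
import numpy as np

def rating_curve_design(flow, discount=0.99):
    """Design matrix for a simple rating-curve regression of log concentration:
    intercept, log flow, a rising/falling-limb indicator, and cumulative
    discounted flow (a crude proxy for constituent exhaustion)."""
    log_q = np.log(flow)
    rising = np.r_[0.0, np.diff(flow) > 0]          # 1 on the rising limb
    cdf = np.zeros_like(flow)
    for t in range(1, len(flow)):                   # cumulative discounted flow
        cdf[t] = discount * cdf[t - 1] + flow[t]
    return np.column_stack([np.ones_like(flow), log_q, rising, cdf / cdf.max()])

# Synthetic data: flow at regular 10-minute intervals, sparse concentration samples.
rng = np.random.default_rng(1)
n = 6 * 24 * 30                                     # one month of 10-minute steps
flow = np.exp(rng.normal(2.0, 0.8, n)).cumsum() / np.arange(1, n + 1) + 5
sample_idx = rng.choice(n, size=200, replace=False) # times with concentration samples
true_conc = 0.5 * flow ** 0.7 * np.exp(rng.normal(0, 0.2, n))
conc_samples = true_conc[sample_idx]

# Step (iii): fit the concentration model on the sampled times by least squares.
X = rating_curve_design(flow)
beta, *_ = np.linalg.lstsq(X[sample_idx], np.log(conc_samples), rcond=None)

# Predict concentration at every regular time step (with a simple log-bias correction).
resid = np.log(conc_samples) - X[sample_idx] @ beta
conc_hat = np.exp(X @ beta + 0.5 * resid.var())

# Step (iv): load = sum of predicted flow x predicted concentration x interval length.
dt = 600.0                                          # seconds per 10-minute interval
load = np.sum(flow * conc_hat * dt)
print(f"Estimated load: {load:.3e} (flow units x concentration units x s)")
```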

Relevance:

100.00%

Publisher:

Abstract:

A healthy, transparent cornea depends upon the regulation of fluid, nutrient and oxygen transport through the tissue to sustain cell metabolism and other processes critical to normal functioning. This research considers the corneal geometry and investigates oxygen distribution using a two-dimensional Monod kinetic model, showing that previous studies make assumptions that lead to predictions of near-anoxic levels of oxygen tension in the limbal regions of the cornea. It also compares experimental spatial and temporal data with the predictions of novel mathematical models of distributed mitotic rates during corneal epithelial wound healing.
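The abstract does not give the model equations, but the general form of a Monod (Michaelis-Menten) oxygen consumption model can be sketched as a steady-state reaction-diffusion problem, D p'' = Q_max p / (K_m + p). The sketch below solves a simplified one-dimensional version across the corneal thickness by finite differences; the geometry, boundary values and all parameter values are illustrative assumptions, not those of the study.

```python
import numpy as np

# Illustrative (not literature-validated) parameters for a 1D slice across the cornea.
L = 0.05          # corneal thickness, cm
D_k = 2.0e-9      # diffusivity x solubility, (cm^2/s)*(mL O2/mL/mmHg) -- assumed
Q_max = 1.0e-4    # maximum consumption rate, mL O2/mL/s -- assumed
K_m = 2.2         # Monod half-saturation constant, mmHg -- assumed
p_front = 155.0   # oxygen tension at the anterior surface (open eye), mmHg
p_back = 24.0     # oxygen tension at the endothelium/aqueous humor, mmHg

n = 201
x = np.linspace(0.0, L, n)
h = x[1] - x[0]
p = np.linspace(p_front, p_back, n)    # initial guess: linear profile

# Gauss-Seidel iteration on the steady-state Monod reaction-diffusion equation:
#   D_k * p'' = Q_max * p / (K_m + p),  p(0) = p_front,  p(L) = p_back
for sweep in range(5000):
    for i in range(1, n - 1):
        reaction = Q_max * p[i] / (K_m + p[i])
        p[i] = 0.5 * (p[i - 1] + p[i + 1] - h * h * reaction / D_k)
    p[p < 0] = 0.0                      # tension cannot be negative

print(f"Minimum oxygen tension: {p.min():.1f} mmHg at x = {x[p.argmin()] * 1e4:.0f} um")
```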

Relevance:

100.00%

Publisher:

Abstract:

Many forms of formative feedback are used in dance training to refine the dancer's spatial and kinaesthetic awareness so that the dancer's sensorimotor intentions and observable danced outcomes converge. This paper documents the use of smartphones to record and play back movement sequences in ballet and contemporary technique classes. Peers worked in pairs, taking turns to film one another and then analyse the playback. This provided immediate visual feedback on the movement sequence as performed by each dancer, and that immediacy helped dancers associate what they felt as they were dancing with what they looked like during the dance. The often-dissonant realities of self-perception and perception by others were thus guided towards harmony, generating improved performance and knowledge of dance technique. An approach is offered for the potential development of peer review activities to support summative progressive assessment in dance technique training.

Relevance:

100.00%

Publisher:

Abstract:

This study investigates the implications of the introduction of electric lighting systems, building technologies, and theories of worker efficiency for the deep spatial and environmental transformations that occurred within the corporate workplace during the twentieth century. Examining the shift from daylighting strategies to largely artificially lit workplace environments, this paper argues that electric lighting significantly contributed to the architectural rationalization of both office work and the modern office environment. Contesting the historical and critical marginalization of lighting within the discourse of the modern built environment, this study calls for a reassessment of the role of artificial lighting in the development of the modern corporate workplace.

Keywords: daylighting, fluorescent lighting, rationalization, workplace design

Relevance:

100.00%

Publisher:

Abstract:

The rapidly burgeoning popularity of cinema at the beginning of the 20th century favored industrialized modes of creativity organized around large production studios that could churn out a steady stream of narrative feature films. By the mid-1910s, a handful of Hollywood studios became leaders in the production, distribution, and exhibition of popular commercial movies. In order to serve incessant demand for new titles, the studios relied on a set of conventions that allowed them to regularize production and realize workplace efficiencies. This entailed a socialized mode of creativity that would later be adopted by radio and television broadcasters. It would also become a model for cinema and media production around the world, both for commercial and state-supported institutions. Even today the core tenets of industrialized creativity prevail in most large media enterprises. During the 1980s and 1990s, however, media industries began to change radically, driven by forces of neoliberalism, corporate conglomeration, globalization, and technological innovation. Today, screen media are created both by large-scale production units and by networked ensembles of talent and skilled labor. Moreover, digital media production may take place in small shops or via the collective labor of media users or fans, who have attracted attention due to their hyphenated status as both producers and users of media (i.e., “prosumers”). Studies of screen media labor fall into four conceptual and methodological categories: historical studies of labor relations, ethnographically inspired investigations of workplace dynamics, critical analyses of the spatial and social organization of labor, and normative assessments of industrialized creativity.

Relevance:

100.00%

Publisher:

Abstract:

Deep convolutional neural networks (DCNNs) have been employed in many computer vision tasks with great success due to their robustness in feature learning. One advantage of DCNNs is that their representations are robust to object location, which is useful for object recognition. However, this also discards spatial information, which is useful for tasks that depend on the spatial layout of the image (e.g. scene labeling, face recognition). In this paper, we propose a deeper and wider network architecture to tackle the scene labeling task. The depth is achieved by incorporating predictions from multiple early layers of the DCNN; the width is achieved by combining multiple outputs of the network. We then further refine the parsing results by adopting graphical models (GMs) as a post-processing step to incorporate spatial and contextual information. The new strategy of a deeper, wider convolutional network coupled with graphical models has shown promising results on the PASCAL-Context dataset.
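The paper's architecture is not specified in this abstract, so the PyTorch sketch below only illustrates the general idea of combining per-pixel predictions taken from several depths of a convolutional backbone and upsampling them to the input resolution. The class count, layer sizes and module names are hypothetical, and the graphical-model (e.g. CRF) post-processing step mentioned in the abstract is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiDepthLabeler(nn.Module):
    """Toy scene-labeling network: per-pixel class scores are predicted from
    several depths of a small convolutional backbone, upsampled to the input
    resolution, and summed. This mimics the idea of using early-layer
    predictions to retain spatial detail; it is not the paper's architecture."""

    def __init__(self, num_classes=21):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))
        self.block3 = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))
        # One score head per depth; their outputs are combined ("wider").
        self.head1 = nn.Conv2d(32, num_classes, 1)
        self.head2 = nn.Conv2d(64, num_classes, 1)
        self.head3 = nn.Conv2d(128, num_classes, 1)

    def forward(self, x):
        size = x.shape[-2:]
        f1 = self.block1(x)
        f2 = self.block2(f1)
        f3 = self.block3(f2)
        scores = 0
        for head, feat in ((self.head1, f1), (self.head2, f2), (self.head3, f3)):
            scores = scores + F.interpolate(head(feat), size=size,
                                            mode="bilinear", align_corners=False)
        return scores  # (batch, num_classes, H, W) per-pixel logits

# Hypothetical usage on a random image batch.
model = MultiDepthLabeler(num_classes=60)   # PASCAL-Context is often used with 59+1 classes
logits = model(torch.randn(2, 3, 128, 128))
labels = logits.argmax(dim=1)               # per-pixel label map
print(labels.shape)                         # torch.Size([2, 128, 128])
```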