170 results for "Spatially"
Abstract:
The purpose of this paper is to consider how libraries support the development of community networks, both physically and digitally. To do this, a case-study methodology was employed, combining data about the library with qualitative interviews exploring users' experience of the library. This paper proposes that libraries act as ‘third places’ that spatially connect people; libraries also build links with online media and play a critical role in inclusively connecting non-technology users with information on the Internet and with digital technology more generally. The paper establishes the value of libraries in the digital age and recommends that libraries actively seek ways to develop links between non-technology users and activity on the Internet, addressing the need to reach these users in different ways. Further, it suggests that libraries use their position as third places to create broader community networks, supporting local communities beyond existing users and beyond the library precinct.
Abstract:
Regional innovation systems (RIS) literature has emphasized the critical role of interactive learning and knowledge exchange amongst firms and a variety of spatially connected innovation institutions as the foundation of regional innovation. Knowledge intermediaries have been analysed in terms of the technology-transaction services they provide to firms and/or knowledge producers such as universities, and therefore the role they play in facilitating interaction within the RIS. However, innovation also depends on the capability of the firm to learn. Some studies have suggested that intermediaries play a role in this regard as well, since participation in intermediary knowledge-transfer programmes can contribute to the development of firm capabilities for problem-solving and learning. Our research is based on two case-study intermediary programmes, involving interviews with facilitators and participants. Our data show that knowledge intermediaries affect organizational learning capabilities through their impact on firms' network relationships, internal and external communication channels, and internal learning processes, which in turn affect the firm's ability to interpret and use knowledge. This suggests that the role of knowledge intermediaries may be greater than facilitating interactions in the innovation system, as knowledge intermediation may affect the ability of firms to learn and absorb knowledge from their environment.
Abstract:
Readily accepted knowledge regarding crash causation is consistently omitted from efforts to model, and subsequently understand, motor vehicle crash occurrence and its contributing factors. For instance, distracted and impaired driving accounts for a significant proportion of crashes, yet is rarely modeled explicitly. In addition, spatially allocated influences such as local law enforcement efforts, proximity to bars and schools, and chronic roadside distractions (advertising, pedestrians, etc.) contribute to crash occurrence and yet are routinely absent from crash models. By and large, these well-established omitted effects are simply assumed to contribute to model error, with the predominant focus on modeling the engineering and operational effects of transportation facilities (e.g. AADT, number of lanes, speed limits, lane widths, etc.). The typical analytical approach—with a variety of statistical enhancements—has been to model crashes that occur at system locations as negative binomial (NB) distributed events arising from a singular, underlying crash-generating process. These models and their statistical kin dominate the literature; however, it is argued in this paper that they fail to capture the underlying complexity of motor vehicle crash causes, and thus thwart deeper insights regarding crash causation and prevention. This paper first describes hypothetical scenarios that collectively illustrate why current models mislead highway safety researchers and engineers. It is argued that current model shortcomings are significant and will lead to poor decision-making. Exploiting our current state of knowledge of crash causation, crash counts are postulated to arise from three processes: observed network features, unobserved spatial effects, and ‘apparent’ random influences that largely reflect behavioral influences of drivers.
It is argued, furthermore, that these three processes can in theory be modeled separately to gain deeper insight into crash causes, and that the resulting model represents a more realistic depiction of reality than the state-of-practice NB regression. An admittedly imperfect empirical model that mixes three independent crash occurrence processes is shown to outperform the classical NB model. The questioning of current modeling assumptions and the implications of the latent mixture model for current practice are the most important contributions of this paper, with an initial but rather vulnerable attempt to model the latent mixtures as a secondary contribution.
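The three-process view of crash counts can be illustrated with a small simulation (a hypothetical sketch, not the paper's model: the parameter values, the gamma-distributed spatial effect, and the two-level behavioural factor are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # number of road segments

# 1. Observed network features -> covariate-driven baseline rate
aadt = rng.uniform(1_000, 20_000, n)        # annual average daily traffic
base_rate = 0.05 + 2e-5 * aadt              # engineering/operational effect

# 2. Unobserved spatial effects -> multiplicative gamma random effect
spatial = rng.gamma(shape=2.0, scale=0.5, size=n)

# 3. 'Apparent' random behavioural influences -> occasional high-risk sites
behaviour = np.where(rng.random(n) < 0.1, 3.0, 1.0)

# Crash counts arise from the product of the three latent processes
crashes = rng.poisson(base_rate * spatial * behaviour)

# Mixing over latent rates inflates the variance well above the mean,
# i.e. overdispersion beyond a single homogeneous Poisson process.
print(crashes.mean(), crashes.var())
```

A single NB fit would absorb all of this heterogeneity into one dispersion parameter, which is exactly the conflation the paper argues against.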
Abstract:
This collaborative project by Daniel Mafe and Andrew Brown, one of a number they have been involved in together, conjoins painting and digital sound into a single, large-scale, immersive exhibition/installation. The work as a whole acts as an interstitial point between contrasting approaches to abstraction: the visual and aural, the digital and analogue are pushed into an alliance, and each works to alter perceptions of the other. For example, the paintings no longer mutely sit on the wall to be stared into. The sound seemingly emanating from each work shifts the viewer’s typical visual perception and engages their aural sensibilities. This seems to make one more aware of the objects as objects – the surface of each piece is brought into scrutiny – and immerses the viewer more viscerally within the exhibition. Similarly, the sonic experience is focused and concentrated spatially by each painted piece even as the exhibition is dispersed throughout the space. The sounds and images are similar in each locale but not identical; though they may seem to be the same on casual interaction, closer attention quickly shows this is not the case. In preparing this exhibition each artist has had to shift their mode of making to accommodate the other’s contribution. This was mainly done by a process of emptying, whereby each was called upon to do less to the works they were making and to iterate the works toward a shared conception, blurring notions of individual imagination while maintaining material authorship. Emptying was necessary to enable sufficient porosity, where each medium allowed the other entry to its previously gated domain. The paintings are simple and subtle to allow the odd sonic textures a chance to work on the viewer’s engagement with them. The sound remains both abstract, using noise-like textures, and at a low volume to allow the audience’s attention to wander back and forth between aspects of the works.
Abstract:
A complex attack is a sequence of temporally and spatially separated legal and illegal actions, each of which can be detected by various IDS but which as a whole constitute a powerful attack. IDS fall short of detecting and modeling complex attacks; therefore, new methods are required. This paper presents a formal methodology for modeling and detecting complex attacks in three phases: (1) we extend the basic attack tree (AT) approach to capture temporal dependencies between components and the expiration of an attack; (2) using the enhanced AT, we build a tree automaton which accepts a sequence of actions from input message streams from various sources if there is a traversal of the AT from leaves to root; and (3) we show how to construct an enhanced parallel automaton that has each tree automaton as a subroutine. We use simulation to test our methods, and provide a case study of representing attacks in WLANs.
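The enhanced attack-tree idea — AND/OR composition of actions with temporal ordering and an expiration window — can be sketched as follows (a simplified illustration under assumed semantics, not the paper's tree-automaton construction; the node names and the 30 s window are invented):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    op: str = "LEAF"              # "LEAF", "AND", or "OR"
    children: list = field(default_factory=list)
    window: float = float("inf")  # expiration: max time span of child events

def detect(node, events):
    """Return the time the (sub)attack completes, or None if not detected.
    `events` maps leaf action name -> sorted list of observed timestamps.
    AND children must complete in their listed order within the window."""
    if node.op == "LEAF":
        ts = events.get(node.name, [])
        return ts[0] if ts else None
    times = [detect(c, events) for c in node.children]
    if node.op == "OR":
        hits = [t for t in times if t is not None]
        return min(hits) if hits else None
    # AND: every child detected, in temporal order, within the window
    if any(t is None for t in times):
        return None
    if times != sorted(times) or max(times) - min(times) > node.window:
        return None
    return max(times)

# Hypothetical WLAN example: deauth flood then rogue-AP association within 30 s
tree = Node("wlan_attack", "AND",
            [Node("deauth"), Node("assoc_rogue")], window=30)
print(detect(tree, {"deauth": [10.0], "assoc_rogue": [25.0]}))   # 25.0
print(detect(tree, {"deauth": [10.0], "assoc_rogue": [100.0]}))  # None (expired)
```

The expiration check is what distinguishes this from a plain attack tree: individually legal actions only assemble into a detection when their timing matches the modelled attack.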
Abstract:
Biological validation of new radiotherapy modalities is essential to understand their therapeutic potential. Antiprotons have been proposed for cancer therapy due to the enhanced dose deposition provided by antiproton-nucleon annihilation. We assessed cellular DNA damage and the relative biological effectiveness (RBE) of a clinically relevant antiproton beam. Despite a modest LET (~19 keV/μm), antiproton spread-out Bragg peak (SOBP) irradiation caused significant residual γ-H2AX foci compared to X-ray, proton and antiproton plateau irradiation. RBE values of ~1.48 in the SOBP and ~1 in the plateau were measured and used for a qualitative effective-dose-curve comparison with protons and carbon ions. Foci in the antiproton SOBP were larger and more structured compared to those from X-rays, protons and carbon ions. This is likely due to overlapping particle tracks near the annihilation vertex, creating spatially correlated DNA lesions. No biological effects were observed at 28–42 mm away from the primary beam, suggesting minimal risk from long-range secondary particles.
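For context, the RBE values quoted follow the standard iso-effect definition (our notation, with X-rays assumed as the reference radiation):

```latex
\mathrm{RBE} = \left. \frac{D_{\text{X-ray}}}{D_{\text{antiproton}}} \right|_{\text{iso-effect}}
```

so an RBE of ~1.48 in the SOBP means the same biological endpoint is reached with roughly 1/1.48 of the dose that X-rays would require.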
Abstract:
We used Magnetic Resonance microimaging (μMRI) to study the compressive behaviour of synthetic elastin. Compression-induced changes in the elastin sample were quantified using longitudinal and transverse spin relaxation rates (R1 and R2, respectively). Spatially-resolved maps of each spin relaxation rate were obtained, allowing the heterogeneous texture of the sample to be observed with and without compression. Compression resulted in an increase of both the mean R1 and the mean R2, but most of this increase was due to sub-locations that exhibited relatively low R1 and R2 in the uncompressed state. This behaviour can be described by differential compression, where local domains in the hydrogel with a relatively low biopolymer content compress more than those with a relatively high biopolymer content.
Abstract:
In a recent paper, Gordon, Muratov, and Shvartsman studied a partial differential equation (PDE) model describing radially symmetric diffusion and degradation in two and three dimensions. They paid particular attention to the local accumulation time (LAT), also known in the literature as the mean action time, which is a spatially dependent timescale that can be used to estimate the time required for the transient solution to effectively reach steady state. They presented exact results for three-dimensional applications and approximate results for the two-dimensional analogue. Here we make two generalizations of Gordon, Muratov, and Shvartsman's work: (i) we present an exact expression for the LAT in any dimension and (ii) we present an exact expression for the variance of the distribution. The variance provides useful information regarding the spread about the mean that is not captured by the LAT. We conclude by describing further extensions of the model that were not considered by Gordon, Muratov, and Shvartsman. We have found that exact expressions for the LAT can also be derived for these important extensions.
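In the usual mean-action-time notation (our summary, not necessarily the paper's exact notation: C(x,t) is the transient solution, C_s(x) the steady state, with C(x,0) = 0), the LAT and the variance it is paired with can be written as

```latex
F(x,t) = 1 - \frac{C(x,t)}{C_s(x)}, \qquad
\tau(x) = \int_0^\infty F(x,t)\,\mathrm{d}t, \qquad
\sigma^2(x) = 2\int_0^\infty t\,F(x,t)\,\mathrm{d}t - \tau(x)^2 .
```

Here C(x,t)/C_s(x) plays the role of a cumulative distribution function in t, so τ(x) is its mean and σ²(x) its variance, the spread about the mean described above.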
Abstract:
Deep Raman spectroscopy is a domain within Raman spectroscopy consisting of techniques that facilitate the depth profiling of diffusely scattering media. Such variants include time-resolved Raman spectroscopy (TRRS) and spatially offset Raman spectroscopy (SORS). A recent study has also demonstrated the integration of TRRS and SORS in the development of time-resolved spatially offset Raman spectroscopy (TR-SORS). This research demonstrates the application of specific deep Raman spectroscopic techniques to concealed samples commonly encountered in forensic and homeland-security settings, at various working distances. Additionally, the concepts behind these techniques are discussed in depth and prospective improvements to the individual techniques are investigated. Qualitative and quantitative analysis of samples based on spectral data acquired from SORS is performed with the aid of multivariate statistical techniques. By the end of this study, an objective comparison is made among the techniques within deep Raman spectroscopy based on their capabilities. The efficiency and quality of these techniques are determined from the results procured, which clarifies the degree of selectivity for the deeper layer exhibited by each technique relative to the others. TR-SORS was shown to exhibit enhanced selectivity for the deeper layer relative to TRRS and SORS whilst providing spectral results with a good signal-to-noise ratio. Conclusive results indicate that TR-SORS is a promising deep Raman technique that offers higher selectivity towards deep layers and therefore enhances the non-invasive analysis of concealed substances at close range as well as standoff distances.
Abstract:
Temperate Australia sits between the heat engine of the tropics and the cold Southern Ocean, encompassing a range of rainfall regimes and falling under the influence of different climatic drivers. Despite this heterogeneity, broad-scale trends in climatic and environmental change are evident over the past 30 ka. During the early glacial period (∼30–22 ka) and the Last Glacial Maximum (∼22–18 ka), climate was relatively cool across the entire temperate zone, and there was an expansion of grasslands and increased fluvial activity in the regionally important Murray–Darling Basin. The temperate region at this time appears to have been dominated by expanded sea ice in the Southern Ocean, forcing a northerly shift in the position of the oceanic fronts and a concomitant influx of cold water along the southeast (including Tasmania) and southwest Australian coasts. The deglacial period (∼18–12 ka) was characterised by glacial recession and eventual disappearance resulting from an increase in temperature deduced from terrestrial records, while there is some evidence for climatic reversals (e.g. the Antarctic Cold Reversal) in high-resolution marine sediment cores through this period. The high spatial density of Holocene terrestrial records reveals an overall expansion of sclerophyll woodland and rainforest taxa across the temperate region after ∼12 ka, presumably in response to increasing temperature, while hydrological records reveal spatially heterogeneous hydro-climatic trends. Patterns after ∼6 ka suggest higher-frequency climatic variability that possibly reflects the onset of large-scale climate variability caused by the El Niño/Southern Oscillation.
Abstract:
We present a method for optical encryption of information based on the time-dependent dynamics of writing and erasing refractive index changes in a bulk lithium niobate medium. Information is written into the photorefractive crystal with a spatially amplitude-modulated laser beam which, when overexposed, significantly degrades the stored data, making it unrecognizable. We show that the degradation can be reversed and that a one-to-one relationship exists between the degradation and recovery rates. It is shown that this simple relationship can be used to determine the erasure time required for decrypting the scrambled index patterns. In addition, this method could be used as a straightforward general technique for determining characteristic writing and erasure rates in photorefractive media.
Abstract:
Nitrous oxide emissions from soil are known to be spatially and temporally volatile. Reliable estimation of emissions over a given time and space depends on measuring with sufficient intensity, but deciding on the number of measuring stations and the frequency of observation can be vexing. The question also arises of whether low-frequency manual observations can provide results comparable to high-frequency automated sampling. Data collected from a replicated field experiment were studied intensively with the intention of giving some statistically robust guidance on these issues. The experiment monitored nitrous oxide soil-to-air flux within 10 m by 2.5 m plots, by automated closed chambers at a 3 h average sampling interval and by manual static chambers at a three-day average sampling interval, over sixty days. Observed trends in flux over time from the static chambers were mostly within the auto-chamber bounds of experimental error. Cumulative nitrous oxide emissions as measured by each system were also within error bounds. Under the temporal response pattern in this experiment, no significant loss of information was observed after culling the data to simulate results under various low-frequency scenarios. Within the confines of this experiment, observations from the manual chambers were not spatially correlated above distances of 1 m. Statistical power was therefore found to improve with increased replicates per treatment or chambers per replicate. Careful after-action review of experimental data can deliver savings for future work.
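The culling exercise described — subsampling a high-frequency flux series and comparing cumulative emissions — can be illustrated with a small simulation (the synthetic flux series, its pulse shape, and all parameter values are invented; the experiment's actual temporal pattern will differ):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 60, 3 / 24)          # 60 days sampled every 3 h (auto chambers)

# Synthetic flux: a smooth emission pulse around day 20 plus baseline and noise
flux = 50 * np.exp(-((t - 20) / 8) ** 2) + 5 + rng.normal(0, 1, t.size)

def cumulative(ts, fs):
    """Cumulative emission by trapezoidal integration of flux over time."""
    return float(np.sum(0.5 * (fs[1:] + fs[:-1]) * np.diff(ts)))

full = cumulative(t, flux)             # every 3 h
keep = slice(None, None, 24)           # cull to one sample every 3 days (manual)
culled = cumulative(t[keep], flux[keep])
print(full, culled, abs(culled - full) / full)
```

When the underlying flux varies smoothly relative to the culled sampling interval, the two cumulative estimates agree closely, mirroring the paper's finding that little information was lost; a flux series with sharp, short-lived pulses between manual visits would not behave this way.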
Abstract:
We have explored the potential of deep Raman spectroscopy, specifically surface-enhanced spatially offset Raman spectroscopy (SESORS), for non-invasive detection from within animal tissue, by employing SERS-barcoded nanoparticle (NP) assemblies as the diagnostic agent. This concept has been experimentally verified in a clinically relevant backscattered Raman system with an excitation line of 785 nm under ex vivo conditions. We have shown that our SORS system, with a fixed offset of 2–3 mm, offered sensitive probing of injected QTH-barcoded NP assemblies through animal tissue containing both protein and lipid. In comparison to non-aggregated SERS-barcoded gold NPs, we have demonstrated that the tailored SERS-barcoded aggregated NP assemblies have significantly higher detection sensitivity. We report that these NP assemblies can be readily detected at depths of 7–8 mm within animal proteinaceous tissue with a high signal-to-noise (S/N) ratio. In addition, they could also be detected beneath 1–2 mm of animal tissue with high lipid content, which generally poses a challenge due to the high absorption of lipids in the near-infrared region. We have also shown that the signal intensity and S/N ratio at a particular depth are a function of the SERS tag concentration used, and that our SORS system has a QTH detection limit of 10⁻⁶ M. Higher detection depths may be obtained with optimization of the NP assemblies, along with improvements in the instrumentation. Such NP assemblies offer prospects for in vivo, non-invasive detection of tumours, along with scope for incorporation of drugs and their targeted and controlled release at tumour sites. These diagnostic agents combined with drug delivery systems could serve as a “theranostic agent”, an integration of diagnostics and therapeutics into a single platform.
Abstract:
Most studies examining the temperature–mortality association in a city have used temperatures from one site or the average from a network of sites. This may cause measurement error, as temperature varies across a city due to effects such as urban heat islands. We examined whether spatiotemporal models using spatially resolved temperatures produced different associations between temperature and mortality compared with time series models that used non-spatial temperatures. We obtained daily mortality data for 163 areas across Brisbane city, Australia, from 2000 to 2004. We used ordinary kriging to interpolate spatial temperature variation across the city based on 19 monitoring sites. We used a spatiotemporal model to examine the impact of spatially resolved temperatures on mortality. We also used a time series model to examine non-spatial temperatures from a single site and the average temperature from three sites. We used squared Pearson scaled residuals to compare model fit. We found that kriged temperatures were consistent with observed temperatures. Spatiotemporal models using kriged temperature data yielded slightly better model fit than time series models using a single site or the average of three sites' data. Despite this better fit, spatiotemporal and time series models produced similar associations between temperature and mortality. In conclusion, time series models using non-spatial temperatures were as good as spatiotemporal models at estimating the city-wide association between temperature and mortality.
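Ordinary kriging of the kind described can be sketched in a few lines (a minimal illustration, not the study's implementation: the exponential variogram with its sill and range, and the synthetic site data, are invented; real applications fit the variogram to the observations first):

```python
import numpy as np

def ordinary_kriging(xy_obs, z_obs, xy_new, variogram):
    """Ordinary kriging predictions at xy_new from observations (xy_obs, z_obs).
    Solves the kriging system with a Lagrange multiplier enforcing that
    the weights sum to one (the unbiasedness constraint)."""
    n = len(xy_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)          # variogram between observation sites
    A[n, n] = 0.0
    d0 = np.linalg.norm(xy_obs[None, :, :] - xy_new[:, None, :], axis=-1)
    b = np.ones((len(xy_new), n + 1))
    b[:, :n] = variogram(d0)          # variogram from sites to target points
    w = np.linalg.solve(A, b.T)       # weights (plus multiplier) per target
    return w[:n].T @ z_obs

# Exponential variogram with made-up sill and range, for illustration only
gamma = lambda h: 1.0 * (1 - np.exp(-h / 2.0))

rng = np.random.default_rng(1)
sites = rng.uniform(0, 10, (19, 2))   # 19 monitoring sites, as in the study
temps = 20 + 0.3 * sites[:, 0] + rng.normal(0, 0.1, 19)  # synthetic east-west trend
pred = ordinary_kriging(sites, temps, np.array([[5.0, 5.0]]), gamma)
print(pred)
```

Because the weights sum to one, the prediction is a spatially weighted average of the site temperatures, which is what lets kriged surfaces stay consistent with the observed values.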
Abstract:
Accurately quantifying total methane release from freshwater storages to the atmosphere requires the spatial–temporal measurement of both diffusive and ebullitive emissions. Existing floating-chamber techniques provide localised assessment of methane flux; however, significant errors can arise when weighting and extrapolating to the entire storage, particularly when ebullition is significant. An improved technique has been developed that complements traditional chamber-based experiments to quantify the storage-scale release of methane gas to the atmosphere through ebullition, using measurements from an Optical Methane Detector (OMD) and a robotic boat. This provides a conservative estimate of the methane emission rate from ebullition along with the bubble volume distribution. It also georeferences areas of ebullition activity across entire storages at short temporal scales. An assessment at Little Nerang Dam in Queensland, Australia, demonstrated that whole-storage methane release differed significantly spatially and throughout the day. Total methane emission estimates showed a potential 32-fold variation in whole-of-dam rates depending on the measurement and extrapolation method and the time of day used. The combined chamber and OMD technique showed that 1.8–7.0% of the surface area of Little Nerang Dam accounts for up to 97% of the total methane release to the atmosphere throughout the day. Additionally, over 95% of detectable ebullition occurred at depths less than 12 m during the day and 6 m at night. This difference in the spatial and temporal distribution of methane release rates highlights the need to monitor significant regions of, if not the entire, water storage in order to provide an accurate estimate of ebullition rates and their contribution to annual methane emissions.