920 results for location-dependent data query
Abstract:
Digital rights management allows information owners to control the use and dissemination of electronic documents via a machine-readable licence. This paper describes the design and implementation of a system for creating and enforcing licences containing location constraints that can be used to restrict access to sensitive documents to a defined area. Documents can be loaded onto a portable device and used in the approved areas, but cannot be used if the device moves to another area. Our contribution includes a taxonomy for access control in the presence of requests to perform non-instantaneous controlled actions.
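As a concrete illustration of enforcing a location constraint at access time, the following minimal sketch tests whether a device sits inside an approved circular area using the standard haversine distance. The circular-area licence clause, the coordinates, and all function names are illustrative assumptions of this note, not the paper's licence language or enforcement architecture.

```python
import math

def within_area(lat, lon, centre_lat, centre_lon, radius_m):
    """Great-circle (haversine) test: is the device inside the approved area?
    A circular area is a simplification; a real licence could carry polygons
    or named zones instead."""
    R = 6_371_000.0  # mean Earth radius, metres
    phi1, phi2 = math.radians(lat), math.radians(centre_lat)
    dphi = math.radians(centre_lat - lat)
    dlmb = math.radians(centre_lon - lon)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a)) <= radius_m

# Hypothetical licence clause: document usable only within 200 m of an office
licence = {"centre": (-27.4773, 153.0281), "radius_m": 200.0}
print(within_area(-27.4770, 153.0285, *licence["centre"], licence["radius_m"]))
```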
Abstract:
Phase-type distributions represent the time to absorption for a finite state Markov chain in continuous time, generalising the exponential distribution and providing a flexible and useful modelling tool. We present a new reversible jump Markov chain Monte Carlo scheme for performing a fully Bayesian analysis of the popular Coxian subclass of phase-type models; the convenient Coxian representation involves fewer parameters than a more general phase-type model. The key novelty of our approach is that we model covariate dependence in the mean whilst using the Coxian phase-type model as a very general residual distribution. Such incorporation of covariates into the model has not previously been attempted in the Bayesian literature. A further novelty is that we also propose a reversible jump scheme for investigating structural changes to the model brought about by the introduction of Erlang phases. Our approach addresses more questions of inference than previous Bayesian treatments of this model and is automatic in nature. We analyse an example dataset comprising lengths of hospital stays of a sample of patients collected from two Australian hospitals to produce a model for a patient's expected length of stay which incorporates the effects of several covariates. This leads to interesting conclusions about what contributes to length of hospital stay with implications for hospital planning. We compare our results with an alternative classical analysis of these data.
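To make the Coxian subclass concrete, the sketch below simulates absorption times for a small Coxian chain: in each phase the chain either absorbs or progresses to the next phase. The parameterisation (per-phase progression and absorption rates) and the function name are assumptions of this note, not the paper's notation, and the reversible jump MCMC scheme itself is not reproduced.

```python
import numpy as np

def simulate_coxian(progression, absorption, size=1, rng=None):
    """Draw absorption times from a Coxian phase-type distribution.

    progression[i] -- rate of moving from phase i to phase i+1 (length p-1)
    absorption[i]  -- rate of absorption from phase i (length p)
    Hypothetical parameterisation, for illustration only.
    """
    rng = rng or np.random.default_rng()
    p = len(absorption)
    times = np.empty(size)
    for k in range(size):
        t, phase = 0.0, 0
        while True:
            prog = progression[phase] if phase < p - 1 else 0.0
            rate = absorption[phase] + prog
            t += rng.exponential(1.0 / rate)
            # absorb now, or progress to the next phase; the last phase always absorbs
            if phase == p - 1 or rng.random() < absorption[phase] / rate:
                break
            phase += 1
        times[k] = t
    return times

# e.g. a 3-phase Coxian as a residual distribution for length of stay (days)
stays = simulate_coxian(progression=[1.5, 0.8], absorption=[0.05, 0.4, 0.6], size=10_000)
print(stays.mean())
```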
Abstract:
Chromatographic fingerprints of 46 Eucommia Bark samples were obtained by liquid chromatography with diode array detection (LC-DAD). The samples were collected from eight provinces in China with different geographical locations and climates. Seven common LC peaks that could be used for fingerprinting this popular traditional Chinese medicine were found, and six were identified by LC-MS as substituted resinols (4 compounds), geniposidic acid and chlorogenic acid. Principal components analysis (PCA) indicated that samples from Sichuan, Hubei, Shanxi and Anhui (the SHSA provinces) clustered together. The objects from the other four provinces, Guizhou, Jiangxi, Gansu and Henan, were discriminated and widely scattered on the biplot in four province clusters. The SHSA provinces are geographically close together while the others are spread out. These results suggest that the composition of the Eucommia Bark samples depends on their geographic location and environment. In general, the basis for discrimination on the PCA biplot from the original 46 objects × 7 variables data matrix was the same as that for the SHSA subset (36 × 7 matrix). The seven marker compound loading vectors grouped into three sets: (1) three closely correlating substituted resinol compounds and chlorogenic acid; (2) the fourth resinol compound, identified by the OCH3 substituent in the R4 position, and an unknown compound; and (3) geniposidic acid, which was independent of the set 1 variables and negatively correlated with the set 2 ones. These observations from the PCA biplot were supported by hierarchical cluster analysis (HCA), and indicated that Eucommia Bark preparations may be successfully compared using the HPLC responses of the seven marker compounds and chemometric methods such as PCA and the complementary HCA.
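The chemometric workflow described (autoscaled PCA biplot plus complementary HCA on a 46 × 7 peak-area matrix) is standard and easy to sketch. In the snippet below a random log-normal matrix merely stands in for the real peak areas, which are not given in the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

# Stand-in for the 46 samples x 7 marker-peak areas (real data not in the abstract)
rng = np.random.default_rng(0)
X = rng.lognormal(mean=2.0, sigma=0.5, size=(46, 7))

Xs = StandardScaler().fit_transform(X)   # autoscale each marker peak
pca = PCA(n_components=2).fit(Xs)
scores = pca.transform(Xs)               # sample coordinates on the biplot
loadings = pca.components_.T             # marker-compound loading vectors
print(pca.explained_variance_ratio_)

# Complementary hierarchical cluster analysis (Ward linkage, e.g. 5 clusters)
labels = fcluster(linkage(Xs, method="ward"), t=5, criterion="maxclust")
```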
Abstract:
The paper analyses the expected value of OD volumes from probes with fixed error, error proportional to zone size, and error inversely proportional to zone size. To add realism to the analysis, real trip ODs in the Tokyo Metropolitan Region are synthesised. The results show that for small zone coding with an average radius of 1.1 km and a fixed measurement error of 100 m, an accuracy of 70% can be expected. The equivalent accuracy for medium zone coding with an average radius of 5 km would translate into a fixed error of approximately 300 m. As expected, small zone coding is more sensitive than medium zone coding, as the chances of the probe error envelope falling into adjacent zones are higher. For the same error radii, error proportional to zone size would deliver a higher level of accuracy. As over half (54.8%) of trip ends start or end at zones with an equivalent radius of ≤1.2 km and only 13% of trip ends occur at zones with an equivalent radius of ≥2.5 km, measurement error that is proportional to zone size, such as that of a mobile phone, would deliver a higher level of accuracy. The synthesis of real ODs with different probe error characteristics has shown that an expected value of >85% is difficult to achieve for small zone coding with an average radius of 1.1 km. For most transport applications, an OD matrix at medium zone coding is sufficient for transport management. From this study it can be drawn that GPS, with an error range between 2 and 5 m, at medium zone coding (average radius of 5 km) would provide OD estimates greater than 90% of the expected value. However, for a typical mobile phone operating error range at medium zone coding, the expected value would be lower than 85%. This paper assumes transmission of one origin and one destination position from the probe. However, if multiple positions within the origin and destination zones are transmitted, map matching to the transport network could be performed, which would greatly improve the accuracy of the probe data.
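The central quantity here, the chance that a probe's error envelope pushes a trip end into an adjacent zone, can be approximated with a simple Monte Carlo sketch. The idealised circular zone and uniform error disc below are simplifying assumptions of this note, not the paper's synthesis of real Tokyo ODs.

```python
import numpy as np

def zone_accuracy(zone_radius_km, error_radius_km, n=100_000, seed=0):
    """Estimate the probability that a measured trip end stays inside its
    (idealised circular) zone: true trip ends uniform over the zone, and
    measurement error uniform over a disc of the given radius."""
    rng = np.random.default_rng(seed)
    r = zone_radius_km * np.sqrt(rng.random(n))     # uniform over a disc
    th = 2 * np.pi * rng.random(n)
    er = error_radius_km * np.sqrt(rng.random(n))   # uniform error disc
    eth = 2 * np.pi * rng.random(n)
    x = r * np.cos(th) + er * np.cos(eth)
    y = r * np.sin(th) + er * np.sin(eth)
    return float(np.mean(np.hypot(x, y) <= zone_radius_km))

print(zone_accuracy(1.1, 0.1))  # small zones (1.1 km) with 100 m fixed error
print(zone_accuracy(5.0, 0.3))  # medium zones (5 km) with ~300 m error
```

Because this toy geometry ignores where trip ends actually cluster within zones, its numbers are only indicative; the paper's synthesis of real ODs is what grounds the 70% and >85% figures above.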
Abstract:
Total deposition of petrol, diesel and environmental tobacco smoke (ETS) aerosols in the human respiratory tract under nasal breathing conditions was computed for 14 nonsmoking volunteers, considering the specific anatomical and respiratory parameters of each volunteer and the specific size distribution for each inhalation experiment. Theoretical predictions were 34.6% for petrol, 24.0% for diesel, and 18.5% for ETS particles. Predicted deposition values were consistently smaller than the measured data (41.4% for petrol, 29.6% for diesel, and 36.2% for ETS particles). The apparent discrepancy between experimental data on total deposition and modeling results may be reconciled by considering the non-spherical shape of the test aerosols via diameter-dependent dynamic shape factors, which account for differences between mobility-equivalent and volume-equivalent or thermodynamic diameters. While the application of dynamic shape factors can explain the observed differences for petrol and diesel particles, additional mechanisms may be required for ETS particle deposition, such as size reduction upon inspiration through evaporation of volatile compounds and/or condensation-induced restructuring, and, possibly, electrical charge effects.
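The reconciliation step rests on the standard drag-equivalence relation between the mobility-equivalent and volume-equivalent diameters. A common textbook form is shown below, with Cunningham slip correction C_c and dynamic shape factor χ; the paper applies a diameter-dependent χ(d), whose exact form is not given in the abstract.

```latex
% Drag equivalence linking the mobility-equivalent diameter d_m to the
% volume-equivalent diameter d_ve via a dynamic shape factor \chi (>= 1)
% and the Cunningham slip correction C_c.
\frac{d_m}{C_c(d_m)} = \chi \, \frac{d_{ve}}{C_c(d_{ve})}
```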
Abstract:
The anisotropic pore structure and elasticity of cancellous bone cause wave speeds and attenuation in cancellous bone to vary with angle. Previously published predictions of the variation in wave speed with angle are reviewed. Predictions that allow tortuosity to be angle-dependent but assume isotropic elasticity compare well with available data on wave speeds at large angles, but less well for small angles near the normal to the trabeculae. Claims for predictions that only include angle-dependence in elasticity are found to be misleading. Audio-frequency data obtained in air-filled bone replicas are used to derive an empirical expression for the angle- and porosity-dependence of tortuosity. Predictions that allow for angle-dependent tortuosity, angle-dependent elasticity, or both are compared with existing data for all angles and porosities.
Abstract:
Purpose: There have been few studies of visual temporal processing in myopic eyes. This study investigated the visual performance of emmetropic and myopic eyes using a backward visual masking location task. Methods: Data were collected for 39 subjects (15 emmetropes, 12 stable myopes, 12 progressing myopes). In backward visual masking, a target’s visibility is reduced by a mask presented in quick succession ‘after’ the target. The target and mask stimuli were presented at different interstimulus intervals (from 12 to 300 ms). The task involved locating the position of a target letter at both a higher (seven per cent) and a lower (five per cent) contrast. Results: Emmetropic subjects had significantly better performance on the lower contrast location task than the myopes (F(2,36) = 22.88; p < 0.001), but there was no difference between the progressing and stable myopic groups (p = 0.911). There were no differences between the groups for the higher contrast location task (F(2,36) = 0.72, p = 0.495). No relationship between task performance and either the magnitude of myopia or axial length was found for either task. Conclusions: A location task deficit was observed in myopes only for lower contrast stimuli. Both emmetropic and myopic groups had better performance on the higher contrast task than on the lower contrast task, with myopes showing considerable improvement. This suggests that five per cent contrast may be the contrast threshold required to bias the task towards the magnocellular system (where myopes have a temporal processing deficit). Alternatively, the task may be sensitive to the contrast sensitivity of the observer.
Abstract:
User-based intelligent systems are already commonplace in a student’s online digital life. Each time students browse, search, buy, join, comment, play, travel, upload or download, a system collects, analyses and processes data in an effort to customise content and further improve services. This panel session will explore how intelligent systems, particularly those that gather data from mobile devices, can offer new possibilities to assist in the delivery of customised, personal and engaging learning experiences. The value of intelligent systems for education lies in their ability to formulate authentic and complex learner profiles that bring together and systematically integrate a student’s personal world with a formal curriculum framework. As we well know, a mobile device can collect data relating to a student’s interests (gathered from search history, applications and communications), location, surroundings and proximity to others (GPS, Bluetooth). What has been less explored, however, is the opportunity for a mobile device to map the movements and activities of a student from moment to moment and over time. This longitudinal data provides a holistic profile of a student, their state and their surroundings. Analysing this data may allow us to identify patterns that reveal a student’s learning processes: when and where they work best, and for how long. By revealing a student’s state and surroundings outside of school hours, this longitudinal data may also highlight opportunities to transform a student’s everyday world into an inventory for learning, punctuating their surroundings with learning recommendations. This would in turn lead to new ways to acknowledge, validate and foster informal learning, making it legitimate within a formal curriculum.
Abstract:
Aims: To describe a local data linkage project matching hospital data with the Australian Institute of Health and Welfare (AIHW) National Death Index (NDI) to assess long-term outcomes of intensive care unit patients. Methods: Data were obtained from hospital intensive care and cardiac surgery databases on all patients aged 18 years and over admitted to either of two intensive care units at a tertiary-referral hospital between 1 January 1994 and 31 December 2005. Date of death was obtained from the AIHW NDI by probabilistic software matching, in addition to manual checking through hospital databases and other sources. Survival was calculated from time of ICU admission, with a censoring date of 14 February 2007. Data for patients with multiple hospital admissions requiring intensive care were analysed only from the first admission. Summary and descriptive statistics were used for preliminary data analysis. Kaplan-Meier survival analysis was used to analyse factors determining long-term survival. Results: During the study period, 21 415 unique patients had 22 552 hospital admissions that included an ICU admission; 19 058 surgical procedures were performed, with a total of 20 092 ICU admissions. There were 4936 deaths. Median follow-up was 6.2 years, totalling 134 203 patient-years. The casemix was predominantly cardiac surgery (80%), followed by cardiac medical (6%) and other medical (4%). The unadjusted survival at 1, 5 and 10 years was 97%, 84% and 70%, respectively. The 1-year survival ranged from 97% for cardiac surgery to 36% for cardiac arrest. An APACHE II score was available for 16 877 patients. In those discharged alive from hospital, the 1-, 5- and 10-year survival varied with discharge location. Conclusions: ICU-based linkage projects are feasible for determining long-term outcomes of ICU patients.
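A minimal sketch of the Kaplan-Meier step, using the lifelines library on a few hypothetical linked records (the real dataset is not public): time runs from ICU admission, deaths are flagged from the NDI match, and survival is reported at 1, 5 and 10 years by casemix.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical linked records: years from ICU admission to death or censoring
# (censoring date 14 February 2007), death flag from the NDI match, and casemix.
df = pd.DataFrame({
    "years":   [0.1, 2.5, 6.2, 10.0, 4.4, 8.1, 1.3, 9.5],
    "died":    [1,   0,   1,   0,    1,   0,   1,   0],
    "casemix": ["cardiac surgery"] * 4 + ["cardiac medical"] * 4,
})

kmf = KaplanMeierFitter()
for group, sub in df.groupby("casemix"):
    kmf.fit(sub["years"], event_observed=sub["died"], label=group)
    # unadjusted survival at 1, 5 and 10 years, as reported in the paper
    print(group, kmf.survival_function_at_times([1, 5, 10]).round(2).tolist())
```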
Abstract:
Alexithymia is characterised by deficits in emotional insight and self-reflection that impact on the efficacy of psychological treatments. Given the high prevalence of alexithymia in alcohol use disorders, valid assessment tools are critical. The majority of research on the relationship between alexithymia and alcohol dependence has employed the self-administered Toronto Alexithymia Scale (TAS-20). The Observer Alexithymia Scale (OAS) has also been recommended. The aim of the present study was to assess the validity and reliability of the OAS and the TAS-20 in an alcohol-dependent sample. Two hundred and ten alcohol-dependent participants in an outpatient cognitive behavioural treatment program were administered the TAS-20 at assessment and upon treatment completion at 12 weeks. Clinical psychologists provided observer assessment data for a subsample of 159 patients. The findings confirmed acceptable internal consistency, test-retest reliability and scale homogeneity for both the OAS and the TAS-20, except for the low internal consistency of the TAS-20 EOT scale. The TAS-20 was more strongly associated with alcohol problems than the OAS.
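Internal consistency in scale-validation studies like this is conventionally reported as Cronbach's alpha; a self-contained sketch is below, with simulated item scores standing in for the TAS-20 data, which are not available from the abstract.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix --
    the usual internal-consistency statistic reported for the TAS-20/OAS."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical: 210 respondents answering 20 five-point TAS-20 items
rng = np.random.default_rng(1)
scores = rng.integers(1, 6, size=(210, 20))
print(cronbach_alpha(scores))
```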
Abstract:
We present algorithms, systems, and experimental results for underwater data muling, in which a mobile agent interacts with static agents to upload, download, or transport data to a different physical location. We consider a system comprising an Autonomous Underwater Vehicle (AUV) and many static Underwater Sensor Nodes (USNs) networked together optically and acoustically. The AUV can locate the static nodes using vision and hover above them for data upload. We describe the hardware and software architecture of this underwater system, as well as experimental data.
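The muling pattern itself is simple to sketch: a mule tours the static nodes and drains each node's buffer. The class and callback names below are illustrative; in the actual system the locate/hover/upload step happens over the vision-guided optical link, which is abstracted here into a single callback.

```python
from dataclasses import dataclass, field

@dataclass
class SensorNode:
    """Static underwater sensor node buffering readings until a mule visits."""
    node_id: int
    buffer: list = field(default_factory=list)

def mule_tour(nodes, hover_and_upload):
    """Illustrative data-muling loop: the mobile agent visits each static
    node, performs the hover-and-upload step, and clears the node's buffer."""
    collected = {}
    for node in nodes:
        collected[node.node_id] = hover_and_upload(node)
        node.buffer.clear()
    return collected

nodes = [SensorNode(1, [20.1, 20.3]), SensorNode(2, [19.8])]
print(mule_tour(nodes, lambda n: list(n.buffer)))
```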
Abstract:
Current-voltage (I-V) curves of poly(3-hexylthiophene) (P3HT) diodes were collected to investigate the polymer's hole-dominated charge transport. At room temperature and at low electric fields the I-V characteristic is purely Ohmic, whereas at medium-high electric fields the experimental data show that hole transport is trap-dominated space-charge-limited current (TD-SCLC). In this regime it is possible to extract the I-V characteristic of the P3HT/Al junction, which shows ideal Schottky diode behaviour over five orders of magnitude. At high applied electric fields, hole transport is found to be in the trap-free SCLC regime. In this regime we have measured and modelled the hole mobility to evaluate its dependence on the applied electric field and on the temperature of the device.
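The trap-free SCLC regime is conventionally described by the Mott-Gurney law, often combined with a Poole-Frenkel field-dependent mobility. The abstract does not give the paper's exact fitting form, so the following is the standard textbook expression rather than the authors' model.

```latex
% Trap-free SCLC (Mott-Gurney law) for a diode of thickness L under bias V,
% with a Poole-Frenkel field-dependent mobility \mu(E,T); \mu_0(T) and
% \gamma(T) are the zero-field mobility and field-activation factor.
J = \frac{9}{8}\,\varepsilon_0 \varepsilon_r\,\mu(E,T)\,\frac{V^2}{L^3},
\qquad
\mu(E,T) = \mu_0(T)\,\exp\!\bigl(\gamma(T)\sqrt{E}\bigr)
```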
Abstract:
Purpose – The purpose of this paper is to examine buyer awareness and acceptance of environmental and energy efficiency measures in the New Zealand residential property market. The study aims to provide a greater understanding of consumer behaviour in the residential property market in relation to green housing issues.
Design/methodology/approach – The paper is based on an extensive survey of Christchurch real estate offices, designed to gather data on the factors that buyers in the residential property market considered important. The survey was designed to allow these factors to be analysed on a socio-economic basis and to compare buyer behaviour based on property values.
Findings – The results show that, regardless of income level, buyers still consider the location of the property and its price to be the most important factors in the house purchase decision. Although awareness of green housing issues and energy efficiency in housing is growing in the residential property market, it is a major consideration only for young and older buyers in the high income brackets, and of only some importance for all other buyer sectors of the residential property market. Many of the voluntary measures introduced by governments to improve the energy efficiency of residential housing are still not considered important by buyers, indicating that a more mandatory approach may have to be undertaken to improve energy efficiency in the established housing market, as these measures are not valued by the buyer.
Originality/value – The paper confirms the variations in real estate buyer behaviour across the full range of residential property markets and the acceptance and awareness of green housing issues and measures. These results would be applicable to most established and transparent residential property markets.
Abstract:
This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on modelling and computation of normalization constants arose from pursuit of these data analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of the zeroes recorded: these may represent a zero response given some threshold (presence) or that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts, and the dingo, cypress and toad case studies described in the motivation chapter are examples of it. Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for the parameters of these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics. Model choice can be assessed by incorporating another tier in the modelling hierarchy. This requires evaluation of a normalization constant, a notoriously difficult problem. The difficulty of estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea present, though not fully developed, in the literature, and propose the integrated mean canonical statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces background computations required for the full implementation of the four-tier model in Chapter 7. Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer.
A major contribution of the thesis is the development, for the first time, of a fully Bayesian approach to inference for these hierarchical models.
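To make the normalization-constant (NC) ratio problem concrete: on a lattice small enough to enumerate, the simple ISMC estimator that the thesis compares against can be checked exactly. The toy two-statistic model, the parameter values and all names below are illustrative assumptions of this note, not the thesis's three-parameter autologistic model or its IMCS method.

```python
import itertools
import numpy as np

# Toy binary MRF on a 3x3 lattice: p(x) propto exp(a*sum(x) + b*agree(x)),
# where agree(x) counts equal 4-neighbour pairs. Small enough to enumerate,
# so the ISMC estimate of log Z(a1,b1) - log Z(a0,b0) can be checked exactly.
n = 3
pairs = [((i, j), (i, j + 1)) for i in range(n) for j in range(n - 1)] + \
        [((i, j), (i + 1, j)) for i in range(n - 1) for j in range(n)]

def suff_stats(bits):
    x = np.array(bits).reshape(n, n)
    agree = sum(int(x[p] == x[q]) for p, q in pairs)
    return x.sum(), agree

s = np.array([suff_stats(b) for b in itertools.product([0, 1], repeat=n * n)],
             dtype=float)                       # (512, 2) sufficient statistics

def log_Z(a, b):
    return np.logaddexp.reduce(a * s[:, 0] + b * s[:, 1])

a0, b0, a1, b1 = 0.0, 0.2, 0.1, 0.4
exact = log_Z(a1, b1) - log_Z(a0, b0)

# ISMC: sample from p_{a0,b0} (exact here via enumeration; MCMC in general)
# and average the importance weights exp((a1-a0)*sum + (b1-b0)*agree).
rng = np.random.default_rng(0)
probs = np.exp(a0 * s[:, 0] + b0 * s[:, 1] - log_Z(a0, b0))
idx = rng.choice(len(s), size=20_000, p=probs / probs.sum())
estimate = np.log(np.mean(np.exp((a1 - a0) * s[idx, 0] + (b1 - b0) * s[idx, 1])))
print(exact, estimate)
```

The known weakness of ISMC, which motivates path-sampling methods such as IMCS, is that the importance weights degenerate as the two parameter points move apart; on this toy example that can be seen directly by widening the gap between (a0, b0) and (a1, b1).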
Abstract:
Data breach notification laws require organisations to notify affected persons or regulatory authorities when an unauthorised acquisition of personal data occurs. Most laws provide a safe harbour to this obligation if the acquired data has been encrypted. There are three types of safe harbour: an exemption, a rebuttable presumption, and factor-based analysis. We demonstrate, using three condition-based scenarios, that the broad formulation of most encryption safe harbours rests on the flawed assumption that encryption is the silver bullet for personal information protection. We then contend that reliance upon an encryption safe harbour should depend upon a rigorous and competent risk-based review conducted on a case-by-case basis. Finally, we recommend the use of both an encryption safe harbour and a notification trigger as our preferred choice for a data breach notification regulatory framework.