999 results for POINT KERNELS
Abstract:
Currently, GNSS computing modes fall into two classes: network-based data processing and user receiver-based processing. A GNSS reference receiver station essentially contributes raw measurement data, either in the RINEX file format or as real-time data streams in the RTCM format; very little computation is carried out by the reference station itself. The existing network-based processing modes, whether executed in real time or post-processed, are centralised or sequential. This paper describes a distributed GNSS computing framework that incorporates three GNSS modes: reference station-based, user receiver-based and network-based data processing. Raw data streams from each GNSS reference receiver station are processed in a distributed manner, i.e., either at the station itself or at a hosting data server/processor, to generate station-based solutions, or reference receiver-specific parameters. These may include precise receiver clock, zenith tropospheric delay, differential code biases, ambiguity parameters and ionospheric delays, as well as line-of-sight information such as azimuth and elevation angles. Covariance information for the estimated parameters may also optionally be provided. In this mode, nearby precise point positioning (PPP) or real-time kinematic (RTK) users can directly use the corrections from all or some of the stations for real-time precise positioning via a data server. At the user receiver, PPP and RTK techniques are unified under the same observation models; the distinction lies in how the user receiver software deals with the corrections from the reference station solutions and with ambiguity estimation in the observation equations. Numerical tests demonstrate good convergence behaviour for differential code bias and ambiguity estimates derived individually with single reference stations. With station-based solutions from three reference stations at distances of 22–103 km, the user receiver positioning results, under various schemes, show an accuracy improvement of the proposed station-augmented PPP and ambiguity-fixed PPP solutions with respect to standard float PPP solutions without station augmentation and ambiguity resolution. Overall, the proposed reference station-based GNSS computing mode can support PPP and RTK positioning services as a simpler alternative to existing network-based RTK or regionally augmented PPP systems.
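For context, the "same observation models" referred to above are, in textbook form, the undifferenced GNSS code and carrier-phase equations below (a standard formulation assumed here, not quoted from the paper):

```latex
% Generic undifferenced observation equations for receiver r, satellite s,
% on one frequency (standard textbook form):
\begin{align}
  P_r^s    &= \rho_r^s + c\,(dt_r - dt^s) + T_r^s + I_r^s
              + d_r - d^s + \varepsilon_P \\
  \Phi_r^s &= \rho_r^s + c\,(dt_r - dt^s) + T_r^s - I_r^s
              + \lambda N_r^s + b_r - b^s + \varepsilon_\Phi
\end{align}
% P: pseudorange; \Phi: carrier phase; \rho: geometric range;
% dt_r, dt^s: receiver/satellite clock errors; T: tropospheric delay;
% I: ionospheric delay; \lambda N: ambiguity term; d, b: code/phase biases;
% \varepsilon: measurement noise.
```

The station-based solutions described above constrain T, I, the clock/bias terms and N at the reference receiver; a nearby user applying those values as corrections in its own equations is what distinguishes the station-augmented and ambiguity-fixed modes from plain float PPP.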
Abstract:
Genomic sequences are fundamentally text documents, admitting various representations according to need and tokenization. Gene expression depends crucially on the binding of enzymes to the DNA sequence at small, poorly conserved binding sites, limiting the utility of standard pattern search. However, one may exploit the regular syntactic structure of the enzyme's component proteins and the corresponding binding sites, framing the problem as one of detecting grammatically correct genomic phrases. In this paper we propose new kernels based on weighted tree structures, traversing the paths within them to capture the features which underpin the task. Experimentally, we find that these kernels provide performance comparable with state-of-the-art approaches for this problem, while offering significant computational advantages over earlier methods. The methods proposed may be applied to a broad range of sequence or tree-structured data in molecular biology and other domains.
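As a minimal sketch of the general idea (illustrative only; these are not the kernels defined in the paper, and the depth-decay weighting is an assumption), a path-based tree kernel can be computed by collecting depth-weighted root-to-node label paths from each tree and taking the inner product of the resulting sparse feature vectors:

```python
# Illustrative weighted path kernel on trees. A tree is a nested
# (label, children, weight) tuple; features are root-to-node label paths,
# down-weighted geometrically by depth.
from collections import Counter

def weighted_paths(node, prefix=(), decay=0.5, depth=1):
    """Collect all root-to-node label paths, down-weighted by depth."""
    label, children, weight = node
    path = prefix + (label,)
    paths = Counter({path: weight * decay ** depth})
    for child in children:
        paths += weighted_paths(child, path, decay, depth + 1)
    return paths

def path_kernel(t1, t2, decay=0.5):
    """Inner product of the two trees' weighted path-count vectors."""
    p1 = weighted_paths(t1, decay=decay)
    p2 = weighted_paths(t2, decay=decay)
    return sum(w * p2[path] for path, w in p1.items() if path in p2)

# Toy example: two tiny parse-like trees.
t1 = ("S", [("A", [], 1.0), ("B", [("A", [], 1.0)], 1.0)], 1.0)
t2 = ("S", [("A", [], 1.0), ("B", [], 1.0)], 1.0)
print(path_kernel(t1, t2))  # shared paths (S,), (S,A), (S,B) -> 0.375
```

The computational appeal mentioned in the abstract is visible even in this toy version: the feature map is sparse, so the kernel reduces to a sparse dot product over shared paths.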
Abstract:
Rigid lenses, which were originally made from glass (between 1888 and 1940) and later from polymethyl methacrylate or silicone acrylate materials, are uncomfortable to wear and are now seldom fitted to new patients. Contact lenses became a popular mode of ophthalmic refractive error correction following the discovery of the first hydrogel material – hydroxyethyl methacrylate – by Czech chemist Otto Wichterle in 1960. To satisfy the requirements for ocular biocompatibility, contact lenses must be transparent and optically stable (for clear vision), have a low elastic modulus (for good comfort), have a hydrophilic surface (for good wettability), and be permeable to certain metabolites, especially oxygen, to allow for normal corneal metabolism and respiration during lens wear. A major breakthrough in respect of the last of these requirements was the development of silicone hydrogel soft lenses in 1999 and techniques for making the surface hydrophilic. The vast majority of contact lenses distributed worldwide are mass-produced using cast molding, although spin casting is also used. These advanced mass-production techniques have facilitated the frequent disposal of contact lenses, leading to improvements in ocular health and fewer complications. More than one-third of all soft contact lenses sold today are designed to be discarded daily (i.e., ‘daily disposable’ lenses).
Abstract:
Loop detectors are the oldest and most widely used traffic data source. On urban arterials, they are mainly installed for signal control. Recently, state-of-the-art Bluetooth MAC Scanners (BMS) have attracted significant interest from stakeholders as a means of area-wide traffic monitoring. Loop detectors provide flow, a fundamental traffic parameter, whereas BMS provide individual vehicle travel times between BMS stations. Hence, these two data sources complement each other and, if integrated, should increase the accuracy and reliability of traffic state estimation. This paper proposes a model that integrates loop and BMS data for seamless travel time and density estimation on urban signalised networks. The proposed model is validated using both real and simulated data, and the results indicate that the accuracy of the proposed model is over 90%.
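The complementarity can be made concrete with the fundamental traffic relation q = k·v: loops measure flow q directly, while a BMS travel time over a known link length yields speed v, from which density k follows. The sketch below shows only this identity (the paper's integrated model for signalised networks is more elaborate; the names and figures here are illustrative):

```python
# Illustrative fusion of loop and BMS data via the fundamental relation
# q = k * v: flow q from the loop detector, speed v from the BMS travel
# time over a known link length, density k derived from the two.

def estimate_density(flow_veh_per_h, link_length_km, travel_time_h):
    """Estimate density (veh/km) from loop flow and BMS travel time."""
    speed_km_per_h = link_length_km / travel_time_h  # v from BMS
    return flow_veh_per_h / speed_km_per_h           # k = q / v

# Example: 900 veh/h at the loop, a 1.2 km link crossed in 3 minutes.
print(estimate_density(900.0, 1.2, 3.0 / 60.0))  # -> 37.5 veh/km
```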
Abstract:
This paper evaluates the performance of prediction intervals generated from alternative time series models in the context of tourism forecasting. The forecasting methods considered include the autoregressive (AR) model, the AR model using the bias-corrected bootstrap, seasonal ARIMA models, innovations state space models for exponential smoothing, and Harvey's structural time series models. We use thirteen monthly time series for the number of tourist arrivals to Hong Kong and Australia. The mean coverage rates and widths of the alternative prediction intervals are evaluated in an empirical setting. It is found that all models produce satisfactory prediction intervals, except for the autoregressive model. In particular, the intervals based on the bias-corrected bootstrap perform best in general, providing tight intervals with accurate coverage rates, especially when the forecast horizon is long.
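For context, bootstrap prediction intervals for an AR model are formed by re-simulating forecast paths with resampled residuals and reading off empirical quantiles. The sketch below implements the plain (uncorrected) variant for an AR(1); the bias-corrected version evaluated in the paper would additionally adjust the estimated coefficients for small-sample bias before resampling:

```python
# Sketch: residual-resampling bootstrap prediction intervals for an AR(1).
import numpy as np

def ar1_bootstrap_interval(y, horizon, n_boot=2000, level=0.95, seed=0):
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    # OLS fit of y_t = c + phi * y_{t-1} + e_t
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    resid = y[1:] - X @ np.array([c, phi])
    resid -= resid.mean()
    # Re-simulate forward paths with resampled residuals.
    paths = np.empty((n_boot, horizon))
    for b in range(n_boot):
        last = y[-1]
        for h in range(horizon):
            last = c + phi * last + rng.choice(resid)
            paths[b, h] = last
    alpha = (1 - level) / 2
    return np.quantile(paths, [alpha, 1 - alpha], axis=0)

# Example with a synthetic monthly series (a stand-in for arrivals data):
rng = np.random.default_rng(1)
y = 100 + np.cumsum(rng.normal(0, 2, 120))
lo, hi = ar1_bootstrap_interval(y, horizon=12)
print(lo[-1], hi[-1])  # bounds of the 12-step-ahead 95% interval
```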
Abstract:
This study reports on the use of the Manchester Driver Behaviour Questionnaire (DBQ) to examine the self-reported driving behaviours of a large sample of Australian fleet drivers (N = 3414). Surveys were completed by employees before they commenced a one-day safety workshop intervention. Factor analysis techniques identified a three-factor solution similar to previous research, comprising: (a) errors, (b) highway-code violations and (c) aggressive driving violations. Two items traditionally associated with highway-code violations were found to be associated with aggressive driving behaviours in the current sample. Multivariate analyses revealed that exposure to the road, errors and self-reported offences predicted crashes at work in the last 12 months, while gender, highway violations and crashes predicted offences incurred while at work. Importantly, those who received more fines at work were at increased risk of crashing the work vehicle. However, overall, the DBQ demonstrated limited efficacy at predicting these two outcomes. This paper outlines the major findings of the study with regard to identifying and predicting aberrant driving behaviours, and also highlights implications for the future use of the DBQ within fleet settings.
Abstract:
We investigate whether framing effects on voluntary contributions are significant in a provision point mechanism. Our results show that framing significantly affects individuals of the same type: cooperative individuals appear more cooperative in the public bads game than in the public goods game, whereas individualistic subjects appear less cooperative in the public bads game than in the public goods game. At the aggregate level, pooling all individuals, the data suggest that framing effects are negligible, in contrast to the established result.
Abstract:
Spatial data are now prevalent in a wide range of fields, including environmental and health science. This has led to the development of a range of approaches for analysing patterns in these data. In this paper, we compare several Bayesian hierarchical models for analysing point-based data based on the discretization of the study region, resulting in grid-based spatial data. The approaches considered include two parametric models and a semiparametric model. We highlight the methodology and computation for each approach. Two simulation studies are undertaken to compare the performance of these models for various structures of simulated point-based data which resemble environmental data. A case study of a real dataset is also conducted to demonstrate a practical application of the modelling approaches. Goodness-of-fit statistics are computed to compare estimates of the intensity functions. The deviance information criterion is also considered as an alternative model evaluation criterion. The results suggest that the adaptive Gaussian Markov random field model performs well for highly sparse point-based data where there are large variations or clustering across the space, whereas the discretized log Gaussian Cox process produces a good fit for dense and clustered point-based data. One should generally consider the nature and structure of the point-based data in order to choose the appropriate method when modelling discretized spatial point-based data.
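The common first step of the compared approaches is the discretization referred to above: a spatial point pattern is binned into grid-cell counts, which the hierarchical models (for example, a discretized log Gaussian Cox process) then treat as Poisson counts driven by a latent log-intensity surface. A minimal sketch of that step follows (no specific model is fitted; the names and parameters are illustrative):

```python
# Discretize a point pattern into grid counts for grid-based modelling.
import numpy as np

def grid_counts(points, bounds, n_cells):
    """Bin (x, y) points into an n_cells x n_cells grid over bounds."""
    (xmin, xmax), (ymin, ymax) = bounds
    counts, _, _ = np.histogram2d(
        points[:, 0], points[:, 1],
        bins=n_cells, range=[[xmin, xmax], [ymin, ymax]],
    )
    return counts

# Example: a clustered synthetic pattern plus uniform background noise.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal([0.3, 0.7], 0.05, (200, 2)),   # one cluster
                 rng.uniform(0.0, 1.0, (100, 2))])         # background
counts = grid_counts(pts, bounds=((0, 1), (0, 1)), n_cells=10)
print(counts.sum())  # points falling inside the grid (about 300)
```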
Abstract:
This paper is about localising across extreme lighting and weather conditions. We depart from the traditional point-feature-based approach, as matching under dramatic appearance changes is brittle and difficult. Point feature detectors are fixed and rigid procedures which pass over an image examining small, low-level structure such as corners or blobs, applying the same criteria to all images of all places. This paper takes a contrary view and asks what is possible if instead we learn a bespoke detector for every place. Our localisation task then turns into curating a large bank of spatially indexed detectors, and we show that this yields vastly superior robustness in exchange for reduced but tolerable metric precision. We present an unsupervised system that produces broad-region detectors for distinctive visual elements, called scene signatures, which can be associated across almost all appearance changes. We show, using 21 km of data collected over a period of 3 months, that our system is capable of producing metric localisation estimates from night-to-day or summer-to-winter conditions.
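To make the "bank of spatially indexed detectors" concrete, here is a structural sketch of the idea only (the data layout, scoring and retrieval window are assumptions for illustration, not the authors' system):

```python
# Structural sketch: a spatially indexed bank of per-place detectors.
# Localisation retrieves the detectors near a coarse position prior and
# returns the place whose bespoke detectors respond best to the live image.
from dataclasses import dataclass, field

@dataclass
class Place:
    position_m: float                              # distance along route
    detectors: list = field(default_factory=list)  # one scorer per signature

def localise(bank, image, prior_m, window_m=200.0):
    """Return the best-matching place within window_m of the prior."""
    candidates = [p for p in bank if abs(p.position_m - prior_m) <= window_m]
    return max(candidates,
               key=lambda p: sum(d(image) for d in p.detectors),
               default=None)

# Toy usage with stand-in detectors (real ones would be learned per place).
bank = [Place(0.0, [lambda img: img.count("tower")]),
        Place(150.0, [lambda img: img.count("bridge")])]
print(localise(bank, "bridge bridge tower", prior_m=100.0).position_m)  # 150.0
```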
Abstract:
The Driver Behaviour Questionnaire (DBQ) continues to be the most widely used self-report scale globally for assessing crash risk and aberrant driving behaviours among motorists. However, the scale also attracts criticism regarding its perceived limited ability to accurately identify those most at risk of crash involvement. This study reports on the use of the DBQ to examine the self-reported driving behaviours (and crash outcomes) of drivers in three separate Australian fleet samples (N = 443, N = 3414 and N = 4792), and on whether combining the samples increases the tool's predictive ability. Either on-line or paper versions of the questionnaire were completed by fleet employees in three organisations. Factor analytic techniques identified either three- or four-factor solutions (in each of the separate studies), and the combined sample produced the expected factors of: (a) errors, (b) highway-code violations and (c) aggressive driving violations. Highway-code violations (and mean scores) were comparable across the studies. However, across the three samples, multivariate analyses revealed that exposure to the road was the best predictor of crash involvement at work, rather than DBQ constructs. Furthermore, combining the scores to produce a sample of 8649 drivers did not improve the predictive ability of the tool for identifying crashes (e.g., 0.4% correctly identified) or demerit point loss (0.3%). The paper outlines the major findings of this comparative sample study with regard to utilising self-report measurement tools to identify "at risk" drivers, as well as the application of such data to future research endeavours.
Abstract:
PURPOSE: Every health care sector, including hospice/palliative care, needs to systematically improve services using patient-defined outcomes. Data from the national Australian Palliative Care Outcomes Collaboration are used to determine whether hospice/palliative care patients' outcomes, and the consistency of those outcomes, have improved over the last 3 years. METHODS: Data were analysed by clinical phase (stable, unstable, deteriorating, terminal). Patient-level data included the Symptom Assessment Scale and the Palliative Care Problem Severity Score. Nationally collected point-of-care data were anchored to the period July–December 2008 and subsequently compared to this baseline over six 6-month reporting cycles for all services that submitted data in every time period (n = 30), using individual longitudinal multi-level random coefficient models. RESULTS: Data were analysed for 19,747 patients (46% female; 85% cancer; 27,928 episodes of care; 65,463 phases). There were significant improvements across all domains (symptom control, family care, psychological and spiritual care) except pain. Simultaneously, the interquartile ranges decreased, jointly indicating that better and more consistent patient outcomes were being achieved. CONCLUSION: These are the first national hospice/palliative care symptom control performance data to demonstrate improvements in clinical outcomes at a service level as a result of routine data collection and systematic feedback.
Abstract:
Partial shading and rapidly changing irradiance conditions significantly impact the performance of photovoltaic (PV) systems. These impacts are particularly severe in tropical regions, where the climatic conditions produce very large and rapid changes in irradiance. In this paper, a hybrid maximum power point (MPP) tracking (MPPT) technique for PV systems operating under partially shaded conditions with rapid irradiance change is proposed. It combines a conventional MPPT and an artificial neural network (ANN)-based MPPT. A low-cost method is proposed to predict the global MPP region when expensive irradiance sensors are not available or are not justifiable on cost grounds. It samples the operating point on the stairs of the I–V curve and uses a combination of the measured current values at the stairs to predict the global MPP region. The conventional MPPT is then used to search within the classified region to find the global MPP. The effectiveness of the proposed MPPT is demonstrated using both simulations and an experimental setup. Experimental comparisons with four existing MPPTs are performed. The results show that the proposed MPPT produces more energy than the other techniques and can effectively track the global MPP with a fast tracking speed under various shading patterns.
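A minimal sketch of the two-stage idea follows (illustrative only: a sampled-power argmax stands in for the paper's ANN region classifier, perturb-and-observe stands in for the conventional MPPT, and the toy curve and parameters are assumptions):

```python
# Two-stage global MPPT sketch: classify the global-MPP region from one
# current sample per stair of the I-V curve, then run a conventional
# perturb-and-observe search inside that region.

def predict_gmpp_region(pv_current_at, n_stairs, v_oc):
    """Probe near the knee of each stair; the highest sampled power wins."""
    best_region, best_power = 0, 0.0
    for k in range(n_stairs):
        v = 0.95 * (k + 1) * v_oc / n_stairs  # probe voltage for stair k
        p = v * pv_current_at(v)
        if p > best_power:
            best_region, best_power = k, p
    return best_region

def perturb_and_observe(pv_current_at, v, step, n_iter=200):
    """Conventional P&O hill climbing from a starting voltage."""
    p_prev = v * pv_current_at(v)
    for _ in range(n_iter):
        v += step
        p = v * pv_current_at(v)
        if p < p_prev:
            step = -step                      # reverse on power decrease
        p_prev = p
    return v

# Toy two-stair I-V curve for a partially shaded string (Voc = 30 V).
curve = lambda v: 8.0 if v < 15.0 else (4.5 if v < 30.0 else 0.0)
region = predict_gmpp_region(curve, n_stairs=2, v_oc=30.0)
v_start = 0.95 * (region + 1) * 30.0 / 2
print(region, perturb_and_observe(curve, v_start, step=0.2))
```

On this toy curve the second stair holds the global MPP (4.5 A near 30 V beats 8.0 A near 15 V), so the classifier selects region 1 and the P&O stage settles near the 30 V knee.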