76 results for weighting


Relevance: 10.00%

Abstract:

Forecasts of volatility and correlation are important inputs into many practical financial problems. Broadly speaking, there are two ways of generating forecasts of these variables. Firstly, time-series models apply a statistical weighting scheme to historical measurements of the variable of interest. The alternative methodology extracts forecasts from the market-traded value of option contracts. An efficient options market should be able to produce superior forecasts, as it utilises a larger information set comprising not only historical information but also the market equilibrium expectation of options market participants. While much research has been conducted into the relative merits of these approaches, this thesis extends the literature along several lines through three empirical studies. Firstly, it is demonstrated that there are statistically significant benefits to adjusting implied volatility for the volatility risk premium for the purposes of univariate volatility forecasting. Secondly, high-frequency option-implied measures are shown to lead to superior forecasts of the stochastic component of intraday volatility, which in turn lead to superior forecasts of total intraday volatility. Finally, realised and option-implied measures of equicorrelation are shown to dominate measures based on daily returns.
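
The statistical weighting scheme applied by time-series models can be illustrated with a minimal RiskMetrics-style exponentially weighted moving average (EWMA) forecast (a generic sketch, not one of the models estimated in the thesis):

```python
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """One-step-ahead EWMA volatility forecast (RiskMetrics-style weighting):
    sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_t**2, seeded with the
    sample variance. Recent observations receive geometrically larger weights."""
    sigma2 = np.var(returns)
    for r in returns:
        sigma2 = lam * sigma2 + (1 - lam) * r ** 2
    return float(np.sqrt(sigma2))

rng = np.random.default_rng(1)
returns = 0.01 * rng.standard_normal(250)  # one year of synthetic daily returns
print(ewma_volatility(returns))            # close to the true 1% daily volatility
```

The decay parameter `lam` controls how quickly the weights on older squared returns fall off; an option-implied forecast would replace this backward-looking scheme with the market's forward-looking expectation.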

Relevance: 10.00%

Abstract:

The Australian e-Health Research Centre and Queensland University of Technology recently participated in the TREC 2012 Medical Records Track. This paper reports on our methods, results and experience using an approach that exploits the concepts and inter-concept relationships defined in the SNOMED CT medical ontology. Our concept-based approach is intended to overcome specific challenges in searching medical records, namely vocabulary mismatch and granularity mismatch. Queries and documents are transformed from their term-based originals into medical concepts as defined by the SNOMED CT ontology; this is done to tackle vocabulary mismatch. In addition, we make use of the SNOMED CT parent-child `is-a' relationships between concepts to weight documents that contain concepts subsumed by the query concepts; this is done to tackle the problem of granularity mismatch. Finally, we experiment with other SNOMED CT relationships besides the is-a relationship to weight concepts related to query concepts. Results show our concept-based approach performed significantly above the median in all four performance metrics. Further improvements are achieved by the incorporation of weighting for subsumed concepts, leading overall to improvements above the median of 28% infAP, 10% infNDCG, 12% R-prec and 7% Prec@10. The incorporation of relationships other than is-a demonstrated mixed results; more research is required to determine which SNOMED CT relationships are best employed when weighting related concepts.
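
The subsumption-weighting idea can be sketched in a few lines. This is a toy illustration with hypothetical concept names and an assumed weight, not the actual TREC system or real SNOMED CT identifiers:

```python
# Toy is-a hierarchy: child concept -> parent concept
IS_A = {
    "viral_pneumonia": "pneumonia",
    "pneumonia": "lung_disease",
}

def subsumed_by(concept, ancestor):
    """Walk the is-a chain upwards; True if `ancestor` is reached."""
    while concept in IS_A:
        concept = IS_A[concept]
        if concept == ancestor:
            return True
    return False

def score(doc_concepts, query_concepts, subsumption_weight=0.5):
    """Exact concept matches score 1.0; a document concept subsumed by a
    query concept (a more specific child) earns partial, down-weighted credit."""
    s = 0.0
    for q in query_concepts:
        for d in doc_concepts:
            if d == q:
                s += 1.0
            elif subsumed_by(d, q):
                s += subsumption_weight
    return s

print(score({"viral_pneumonia"}, {"pneumonia"}))  # 0.5: subsumed-concept credit
```

A query for "pneumonia" thus still retrieves documents coded only with the more specific "viral pneumonia", addressing granularity mismatch.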

Relevance: 10.00%

Abstract:

The adequacy of the UV Index (UVI), a simple measure of ambient solar ultraviolet (UV) radiation, has been questioned on the basis of recent scientific data on the importance of vitamin D for human health, the mutagenic capacity of radiation in the UVA wavelength, and limitations in the behavioral impact of the UVI as a public awareness tool. A working group convened by ICNIRP and WHO met to assess whether modifications of the UVI were warranted and to discuss ways of improving its effectiveness as a guide to healthy sun-protective behavior. A UV Index greater than 3 was confirmed as indicating ambient UV levels at which harmful sun exposure and sunburns could occur and hence as the threshold for promoting preventive messages. There is currently insufficient evidence about the quantitative relationship of sun exposure, vitamin D, and human health to include vitamin D considerations in sun protection recommendations. The role of UVA in sunlight-induced dermal immunosuppression and DNA damage was acknowledged, but the contribution of UVA to skin carcinogenesis could not be quantified precisely. As ambient UVA and UVB levels mostly vary in parallel in real-life situations, any minor modification of the UVI weighting function with respect to UVA-induced skin cancer would not be expected to have a significant impact on the UV Index. Though it has been shown that the UV Index can raise awareness of the risk of UV radiation to some extent, the UVI does not appear to change attitudes to sun protection or behavior in the way it is presently used. Changes to the UVI itself were not warranted on the basis of these findings; rather, research testing health behavior models, including the roles of self-efficacy and self-affirmation in relation to the intention to use sun protection among different susceptible groups, should be carried out to develop more successful strategies for improving sun protection behavior. Health Phys. 103(3):301-306; 2012

Relevance: 10.00%

Abstract:

Current state-of-the-art robot mapping and navigation systems produce impressive performance under a narrow range of robot platform, sensor and environmental conditions, in contrast to animals such as rats, which produce “good enough” maps that enable them to function under an incredible range of situations. In this paper we present a rat-inspired, featureless sensor-fusion system that assesses the usefulness of multiple sensor modalities based on their utility and coherence for place recognition during a navigation task, without knowledge of the type of sensor. We demonstrate the system on a Pioneer robot in indoor and outdoor environments with abrupt lighting changes. Through dynamic weighting of the sensors, the system is able to perform correct place recognition and mapping where a static sensor-weighting approach fails.
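
One simple way to realise such dynamic weighting (a hypothetical sketch for illustration, not the paper's algorithm) is to normalise each sensor's recent place-recognition match quality into a fusion weight, so a sensor that stops being coherent is automatically downweighted:

```python
import numpy as np

def dynamic_weights(similarities):
    """Hypothetical dynamic sensor weighting: each sensor's weight is its
    mean place-recognition match score over recent frames (a proxy for
    'coherence'), normalised so the weights sum to 1."""
    utility = np.clip(similarities.mean(axis=1), 0.0, None)
    return utility / utility.sum()

# rows = sensors (camera, laser), cols = match scores over recent frames
scores = np.array([
    [0.9, 0.1, 0.1],   # camera degraded by an abrupt lighting change
    [0.8, 0.8, 0.7],   # laser stays coherent
])
print(dynamic_weights(scores))  # laser dominates the fused estimate
```

A static scheme would keep the camera's original weight after the lighting change, which is the failure mode the dynamic approach avoids.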

Relevance: 10.00%

Abstract:

In this paper we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one pre-processes the source image and template/model with a bank of filters (e.g. oriented edges, Gabor, etc.) as: (i) it can handle substantial illumination variations, (ii) the inefficient pre-processing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, (iii) unlike traditional LK, the computational cost is invariant to the number of filters, making the approach far more efficient, and (iv) this approach can be extended to the inverse compositional form of the LK algorithm where nearly all steps (including Fourier transform and filter bank pre-processing) can be pre-computed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to non-rigid object alignment tasks that are considered extensions of the LK algorithm such as those found in Active Appearance Models (AAMs).
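
Point (ii) can be checked numerically: by Parseval's theorem, filtering both images and comparing them in the spatial domain equals a diagonal weighting of their Fourier coefficients. A small sketch (illustrative only, with a random array standing in for an oriented-edge or Gabor filter):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
img = rng.standard_normal((N, N))
tmpl = rng.standard_normal((N, N))
filt = rng.standard_normal((N, N))  # stand-in for a pre-processing filter

def circ_conv(a, b):
    """Circular convolution via the FFT (pointwise product in frequency)."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

# Spatial domain: SSD between the filtered image and filtered template
ssd_spatial = np.sum((circ_conv(filt, img) - circ_conv(filt, tmpl)) ** 2)

# Fourier domain: the same SSD as a diagonally weighted distance, where the
# diagonal weight is |F(filt)|^2 applied element-wise to the coefficients
F_img, F_tmpl = np.fft.fft2(img), np.fft.fft2(tmpl)
w = np.abs(np.fft.fft2(filt)) ** 2
ssd_fourier = np.sum(w * np.abs(F_img - F_tmpl) ** 2) / (N * N)  # Parseval scale

print(np.allclose(ssd_spatial, ssd_fourier))  # True
```

With a bank of K filters the weights from each filter simply add into one diagonal matrix, which is why the cost stays invariant to K.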

Relevance: 10.00%

Abstract:

Purpose: Photoreceptor interactions reduce the temporal bandwidth of the visual system under mesopic illumination. The dynamics of these interactions are not clear. This study investigated cone-cone and rod-cone interactions when the rod (R) and three cone (L, M, S) photoreceptor classes contribute to vision via shared post-receptoral pathways. Methods: A four-primary photostimulator independently controlled photoreceptor activity in human observers. To determine the temporal dynamics of receptoral (L, S, R) and post-receptoral (LMS, LMSR, +L-M) pathways (5 Td, 7° eccentricity) in Experiment 1, ON-pathway sensitivity was assayed with an incremental probe (25ms) presented relative to onset of an incremental sawtooth conditioning pulse (1000ms). To define the post-receptoral pathways mediating the rod stimulus, Experiment 2 matched the color appearance of increased rod activation (30% contrast, 25-1000ms; constant cone excitation) with cone stimuli (variable L+M, L/L+M, S/L+M; constant rod excitation). Results: Cone-cone interactions with luminance stimuli (LMS, LMSR, L-cone) reduced Weber contrast sensitivity by 13% and the time course of adaptation was 23.7±1ms (μ±SE). With chromatic stimuli (+L-M, S), cone pathway sensitivity was also reduced and recovery was slower (+L-M 8%, 2.9±0.1ms; S 38%, 1.5±0.3ms). Threshold patterns at ON-conditioning pulse onset were monophasic for luminance and biphasic for chromatic stimuli. Rod-rod interactions increased sensitivity (19%) with a recovery time of 0.7±0.2ms. Compared to cone-cone interactions, rod-cone interactions with luminance stimuli reduced sensitivity to a lesser degree (5%) with faster recovery (42.9±0.7ms). Rod-cone interactions were absent with chromatic stimuli. Experiment 2 showed that rod activation generated luminance (L+M) signals at all durations, and chromatic signals (L/L+M, S/L+M) for durations >75ms.
Conclusions: Temporal dynamics of cone-cone interactions are consistent with contrast sensitivity loss in the MC pathway for luminance stimuli and chromatically opponent responses in the PC and KC pathway with chromatic stimuli. Rod-cone interactions limit contrast sensitivity loss during dynamic illumination changes and increase the speed of mesopic light adaptation. The change in relative weighting of the temporal rod signal within the major post-receptoral pathways modifies the sensitivity and dynamics of photoreceptor interactions.

Relevance: 10.00%

Abstract:

This study investigates the time-course and post-receptoral pathway signaling of photoreceptor interactions when the rod (R) and three cone (L, M, S) photoreceptor classes contribute to mesopic vision. A four-primary photostimulator independently controls photoreceptor activity in human observers. The first experiment defines the temporal adaptation response of receptoral (L-, S-cone, rod) and post-receptoral (LMS, LMSR, +L-M) signaling and interactions. Here we show that nonopponent cone-cone interactions (L-cone, LMS, LMSR) have monophasic temporal response patterns whereas opponent signals (+L-M, S-cone) show biphasic response patterns with slower recovery. By comparison, rod-cone interactions with nonopponent signals have faster adaptation responses and reduced sensitivity loss whereas opponent rod-cone interactions are small or absent. Additionally, the rod-rod interaction differs from these interaction types and acts to increase rod sensitivity due to temporal summation but with a slower time course. The second experiment shows that the temporal profile of the rod signal alters the relative rod contributions to the three primary post-receptoral pathways. We demonstrate that rod signals generate luminance (+L+M) signals mediated via the MC pathway with all rod temporal profiles and chromatic signals (L/L+M, S/L+M) in both the PC and KC pathways with durations >75 ms. Thus, we propose that the change in relative weighting of rod signals within the post-receptoral pathways contributes to the sensitivity and temporal response of rod and cone pathway signaling and interactions.

Relevance: 10.00%

Abstract:

In this paper, we consider a space fractional advection–dispersion equation. The equation is obtained from the standard advection–diffusion equation by replacing the first- and second-order space derivatives by the Riesz fractional derivatives of order β1 ∈ (0, 1) and β2 ∈ (1, 2], respectively. The fractional advection and dispersion terms are approximated by using two fractional centred difference schemes. A new weighted Riesz fractional finite-difference approximation scheme is proposed. When the weighting factor θ equals 1/2, a second-order accuracy scheme is obtained. The stability, consistency and convergence of the numerical approximation scheme are discussed. A numerical example is given to show that the numerical results are in good agreement with our theoretical analysis.
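
For illustration, the coefficients of a fractional centred difference (assuming Ortigueira's definition, on which such Riesz-derivative schemes are commonly built) can be generated by a simple recurrence that avoids the poles of the Gamma function; at β = 2 they recover the classical second difference:

```python
import math

def riesz_centred_coeffs(beta, K):
    """Fractional centred-difference coefficients g_0..g_K, where
    g_k = (-1)^k * Gamma(beta+1) / (Gamma(beta/2 - k + 1) * Gamma(beta/2 + k + 1)),
    computed via the recurrence g_{k+1} = g_k * (k - beta/2) / (beta/2 + k + 1)."""
    g = [math.gamma(beta + 1) / math.gamma(beta / 2 + 1) ** 2]
    for k in range(K):
        g.append(g[-1] * (k - beta / 2) / (beta / 2 + k + 1))
    return g

# Sanity check: beta = 2 gives g_0 = 2, g_1 = -1, g_k = 0 for k >= 2,
# i.e. the classical [-1, 2, -1] second-difference stencil.
print(riesz_centred_coeffs(2.0, 3))
```

For β strictly between 1 and 2 the coefficients decay slowly, which is why the Riesz dispersion term couples every grid point rather than only nearest neighbours.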

Relevance: 10.00%

Abstract:

The objective of this PhD research program is to investigate numerical methods for simulating variably-saturated flow and sea water intrusion in coastal aquifers in a high-performance computing environment. The work is divided into three overlapping tasks: to develop an accurate and stable finite volume discretisation and numerical solution strategy for the variably-saturated flow and salt transport equations; to implement the chosen approach in a high performance computing environment that may have multiple GPUs or CPU cores; and to verify and test the implementation. The geological description of aquifers is often complex, with porous materials possessing highly variable properties that are best described using unstructured meshes. The finite volume method is a popular method for the solution of the conservation laws that describe sea water intrusion, and is well-suited to unstructured meshes. In this work we apply a control volume-finite element (CV-FE) method to an extension of a recently proposed formulation (Kees and Miller, 2002) for variably saturated groundwater flow. The CV-FE method evaluates fluxes at points where material properties and gradients in pressure and concentration are consistently defined, making it both suitable for heterogeneous media and mass conservative. Using the method of lines, the CV-FE discretisation gives a set of differential algebraic equations (DAEs) amenable to solution using higher-order implicit solvers. Heterogeneous computer systems, which use a combination of computational hardware such as CPUs and GPUs, are attractive for scientific computing due to the potential advantages offered by GPUs for accelerating data-parallel operations. We present a C++ library that implements data-parallel methods on both CPUs and GPUs. The finite volume discretisation is expressed in terms of these data-parallel operations, which gives an efficient implementation of the nonlinear residual function.
This makes the implicit solution of the DAE system possible on the GPU, because the inexact Newton-Krylov method used by the implicit time stepping scheme can approximate the action of a matrix on a vector using residual evaluations. We also propose preconditioning strategies that are amenable to GPU implementation, so that all computationally-intensive aspects of the implicit time stepping scheme are implemented on the GPU. Results are presented that demonstrate the efficiency and accuracy of the proposed numerical methods and formulation. The formulation offers excellent conservation of mass, and higher-order temporal integration increases both the numerical efficiency and the accuracy of the solutions. Flux limiting produces accurate, oscillation-free solutions on coarse meshes, where much finer meshes are required to obtain solutions with equivalent accuracy using upstream weighting. The computational efficiency of the software is investigated using CPUs and GPUs on a high-performance workstation. The GPU version offers considerable speedup over the CPU version, with one GPU giving a speedup factor of 3 over the eight-core CPU implementation.
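
The matrix-free step at the heart of the inexact Newton-Krylov approach can be sketched in a few lines: the Jacobian's action on a vector is approximated by a finite difference of two residual evaluations, so the Jacobian never needs to be formed. This is a generic illustration in Python, not the thesis's C++ implementation:

```python
import numpy as np

def jvp_finite_difference(F, u, v, eps=1e-7):
    """Matrix-free Jacobian-vector product: J(u) @ v ~ (F(u + eps*v) - F(u)) / eps.
    A Krylov solver (e.g. GMRES) only ever needs this product, so the whole
    Newton step can run off residual evaluations alone, e.g. on a GPU."""
    return (F(u + eps * v) - F(u)) / eps

# Toy residual with a known Jacobian: F(u) = u**2, so J(u) = diag(2u)
F = lambda u: u ** 2
u = np.array([1.0, 2.0, 3.0])
v = np.array([1.0, 1.0, 1.0])
print(jvp_finite_difference(F, u, v))  # approximately [2., 4., 6.]
```

Because the only expensive operation is the residual itself, any speedup in the data-parallel residual evaluation carries directly over to the implicit solver.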

Relevance: 10.00%

Abstract:

Despite its potential multiple contributions to sustainable policy objectives, urban transit is generally not widely used by the public in terms of its market share compared to that of automobiles, particularly in affluent societies with low-density urban forms like Australia. Transit service providers need to attract more people to transit by improving transit quality of service. The key to cost-effective transit service improvements lies in accurate evaluation of policy proposals by taking into account their impacts on transit users. If transit providers knew what is more or less important to their customers, they could focus their efforts on optimising customer-oriented service. Policy interventions could also be specified to influence transit users’ travel decisions, with targets of customer satisfaction and broader community welfare. This significance motivates the research into the relationship between urban transit quality of service and its user perception as well as behaviour. This research focused on two dimensions of transit user’s travel behaviour: route choice and access arrival time choice. The study area chosen was a busy urban transit corridor linking Brisbane central business district (CBD) and the St. Lucia campus of The University of Queensland (UQ). This multi-system corridor provided a ‘natural experiment’ for transit users between the CBD and UQ, as they can choose between busway 109 (with grade-separated exclusive right-of-way), ordinary on-street bus 412, and linear fast ferry CityCat on the Brisbane River. The population of interest was set as the attendees to UQ, who travelled from the CBD or from a suburb via the CBD. Two waves of internet-based self-completion questionnaire surveys were conducted to collect data on sampled passengers’ perception of transit service quality and behaviour of using public transit in the study area.
The first-wave survey collected behaviour and attitude data on respondents’ daily transit usage and their direct ratings of the importance of factors of route-level transit quality of service. A series of statistical analyses was conducted to examine the relationships between transit users’ travel and personal characteristics and their transit usage characteristics. A factor-cluster segmentation procedure was applied to respondents’ importance ratings on service quality variables regarding transit route preference, to explore users’ various perspectives on transit quality of service. Based on the perceptions of service quality collected from the second-wave survey, a series of quality criteria of the transit routes under study was quantitatively measured, in particular travel time reliability in terms of schedule adherence. It was shown that mixed traffic conditions and peak-period effects can affect transit service reliability. Multinomial logit models of transit users’ route choice were estimated using route-level service quality perceptions collected in the second-wave survey. The relative importance of service quality factors was derived from the choice models’ significant parameter estimates, such as access and egress times, seat availability, and the busway system. Interpretations of the parameter estimates were conducted, particularly the equivalent in-vehicle time of access and egress times, and busway in-vehicle time. Market segmentation by trip origin was applied to investigate the difference in magnitude between the parameter estimates of access and egress times. The significant costs of transfers in transit trips were highlighted. These importance ratios were applied back to the quality perceptions collected as RP data to compare satisfaction levels between the service attributes and to generate an action relevance matrix to prioritise attributes for quality improvement.
An empirical study of the relationship between average passenger waiting time and transit service characteristics was performed using the service quality perceptions. Passenger arrivals for services with long headways (over 15 minutes) were found to be clearly coordinated with the scheduled departure times of transit vehicles in order to reduce waiting time. This drove further investigation and modelling innovations in passengers’ access arrival time choice and its relationships with transit service characteristics and average passenger waiting time. Specifically, original contributions were made in the formulation of expected waiting time, the analysis of the risk-aversion attitude towards missing a desired service run in passengers’ access arrival time choice, and extensions of the utility function specification for modelling the passenger access arrival distribution, using expected utility forms and non-linear probability weighting to explicitly accommodate the risk of missing an intended service and passengers’ risk-aversion attitudes. Discussion of this research’s contributions to knowledge, its limitations, and recommendations for future research is provided in the concluding section of this thesis.
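
One widely used non-linear probability weighting form that could play this role is the Tversky-Kahneman (1992) function, shown here purely as a generic illustration (the thesis's exact specification may differ); it overweights small probabilities, such as the probability of missing a service run:

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting function:
    w(p) = p^g / (p^g + (1-p)^g)^(1/g).
    With gamma < 1 it overweights small probabilities and underweights
    moderate-to-large ones, matching risk-averse behaviour towards rare
    but costly events like missing an intended departure."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

for p in (0.01, 0.5, 0.99):
    print(p, "->", round(tk_weight(p), 3))
```

Plugged into an expected-utility form, this weighting penalises access arrival times that leave even a small chance of missing the run, shifting the predicted arrival distribution earlier than a risk-neutral model would.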

Relevance: 10.00%

Abstract:

The experiences of people affected by cancer are at the very heart of nursing research efforts. Because much of our work is focused on understanding how to improve experiences and outcomes for people with cancer, it is easy for us to believe that our research is inherently "person centered" and thus collaborative. Let's reflect on what truly collaborative approaches to cancer nursing research could be like, and how we measure up to such goals. Collaboration between people affected by cancer (consumers) and nurses in research is much more than providing a voice for individuals as participants in a research study. Today, research governing bodies in many countries require us to seek a different kind of consumer participation, where consumers and researchers work in partnership with one another to shape decisions about research priorities, policies, and practices [1]. Most granting bodies now require explanations of how consumer and community participation will occur within a study. Ethical imperatives and the concept of patient advocacy also require that we give more considered attention to what is meant by consumer involvement [2]. Consumers provide perspective on what will be relevant, acceptable, feasible, and sensitive research, having lived the experience of cancer. As a result, they offer practical insights that can ensure the successful conduct and better outcomes from research. Some granting bodies now even allocate a proportion of the final score or assign a "public value" weighting for a grant, to recognize the importance of consumer involvement and reflect the quality of patient involvement in all stages of the research process [3].

Relevance: 10.00%

Abstract:

Accurately quantifying total freshwater storage methane release to atmosphere requires the spatial–temporal measurement of both diffusive and ebullitive emissions. Existing floating chamber techniques provide localised assessment of methane flux; however, significant errors can arise when weighting and extrapolating to the entire storage, particularly when ebullition is significant. An improved technique has been developed that complements traditional chamber-based experiments to quantify the storage-scale release of methane gas to atmosphere through ebullition, using measurements from an Optical Methane Detector (OMD) and a robotic boat. This provides a conservative estimate of the methane emission rate from ebullition along with the bubble volume distribution. It also georeferences the areas of ebullition activity across entire storages at short temporal scales. An assessment on Little Nerang Dam in Queensland, Australia, demonstrated that whole-storage methane release differed significantly spatially and throughout the day. Total methane emission estimates showed a potential 32-fold variation in whole-of-dam rates depending on the measurement and extrapolation method and the time of day used. The combined chamber and OMD technique showed that 1.8–7.0% of the surface area of Little Nerang Dam accounts for up to 97% of total methane release to atmosphere throughout the day. Additionally, over 95% of detectable ebullition occurred at depths of less than 12 m during the day and 6 m at night. This difference in the spatial and temporal distribution of methane release rates highlights the need to monitor significant regions of, if not the entire, water storage in order to provide an accurate estimate of ebullition rates and their contribution to annual methane emissions.

Relevance: 10.00%

Abstract:

This study investigated how, and to what degree, “hybrid photography”—the simultaneous use of indexical and fictional properties and strategies—innovates the representation of animals within animalcentric, ecocentric frameworks. Design theory structured this project’s practice-led, visual research methodology framework. Grounded theory processes articulated emerging categories of hybrid photography by systematically and comparatively treating animal photography works for reflexive analysis. Design theory then applied and clarified these categories, developing practice that re-visualised shark perspectives as new ecological discourse. Shadows, a creative practice installation, realised a full-scale photographic investigation into shark and marine animal realities of a specific environment—Heron Island and Gladstone, Great Barrier Reef—facing ecological crisis from dredging and development at Gladstone Harbour. The works rendered and explored hybrid photography’s capacity for illuminating nonhuman animals, in particular sharks, and comprise 65% of this project’s weighting. This exegetical paper offers a definition, strategies and evaluation of hybrid photography in unsettling animal perspectives as effective ecological discourse, and comprises the remaining 35%.

Relevance: 10.00%

Abstract:

Physical and chemical properties of biodiesel are influenced by structural features of the fatty acids, such as chain length, degree of unsaturation and branching of the carbon chain. This study investigated whether microalgal fatty acid profiles are suitable for biodiesel characterization and species selection through Preference Ranking Organisation Method for Enrichment Evaluation (PROMETHEE) and Graphical Analysis for Interactive Assistance (GAIA) analysis. Fatty acid methyl ester (FAME) profiles were used to calculate the likely key chemical and physical properties of the biodiesel [cetane number (CN), iodine value (IV), cold filter plugging point, density, kinematic viscosity, higher heating value] of nine microalgal species (this study) and twelve species from the literature, selected for their suitability for cultivation in subtropical climates. An equal-parameter-weighted PROMETHEE-GAIA analysis ranked Nannochloropsis oculata, Extubocellulus sp. and Biddulphia sp. highest; these were the only species meeting the EN14214 and ASTM D6751-02 biodiesel standards, except for the double bond limit in the EN14214. Chlorella vulgaris outranked N. oculata when the twelve literature microalgae were included. Culture growth phase (stationary) and, to a lesser extent, nutrient provision affected the CN and IV values of N. oculata due to lower eicosapentaenoic acid (EPA) contents. Application of a polyunsaturated fatty acid (PUFA) weighting to saturation led to a lower ranking of species exceeding the EN14214 double bond thresholds. In summary, CN, IV, C18:3 and double bond limits were the strongest drivers in the equal-parameter-weighted PROMETHEE analysis.
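
A minimal PROMETHEE II net-flow ranking can be sketched with the simple 'usual' preference function (an alternative earns full preference on a criterion whenever it strictly beats another). The criterion values below are hypothetical, and the study weighted six biodiesel properties rather than the two shown here:

```python
import numpy as np

def promethee_ii(scores, weights, maximize):
    """PROMETHEE II net-flow ranking with the 'usual' preference function.
    scores: (n_alternatives, n_criteria); weights sum to 1;
    maximize: boolean per criterion (False = lower is better)."""
    X = scores * np.where(maximize, 1.0, -1.0)   # flip minimised criteria
    n = X.shape[0]
    phi = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            pref_ab = np.sum(weights * (X[a] > X[b]))  # a over b, weighted
            pref_ba = np.sum(weights * (X[b] > X[a]))  # b over a, weighted
            phi[a] += (pref_ab - pref_ba) / (n - 1)
    return phi  # higher net flow = better ranked

# Toy example: 3 species, criteria = [cetane number (max), iodine value (min)]
scores = np.array([[55.0, 90.0], [50.0, 120.0], [60.0, 70.0]])
weights = np.array([0.5, 0.5])
phi = promethee_ii(scores, weights, maximize=np.array([True, False]))
print(phi.argsort()[::-1])  # ranking, best first
```

Changing `weights` (e.g. adding a PUFA penalty) reorders the net flows, which is how the parameter weighting drives the species ranking in the analysis described above.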

Relevance: 10.00%

Abstract:

The continuous growth of XML data poses a great concern in the area of XML data management. The need to process large amounts of XML data brings complications to many applications, such as information retrieval, data integration and many others. One way of simplifying this problem is to break the massive amount of data into smaller groups by applying clustering techniques. However, XML clustering is an intricate task that may involve processing both the structure and the content of XML data in order to identify similar XML data. This research presents four clustering methods: two methods utilizing the structure of XML documents and two utilizing both the structure and the content. The two structural clustering methods have different data models: one is based on a path model and the other on a tree model. These methods employ rigid similarity measures which aim to identify corresponding elements between documents with different or similar underlying structure. The two clustering methods that utilize both structural and content information vary in terms of how the structure and content similarity are combined. One clustering method calculates the document similarity by using a linear weighting combination strategy of structure and content similarities; the content similarity in this clustering method is based on a semantic kernel. The other method calculates the distance between documents by a non-linear combination of the structure and content of XML documents using a semantic kernel. Empirical analysis shows that the structure-only clustering method based on the tree model is more scalable than the structure-only clustering method based on the path model, as the tree similarity measure for the tree model does not need to visit the parents of an element many times. Experimental results also show that the clustering methods perform better with the inclusion of the content information on most test document collections.
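
The linear weighting combination strategy can be sketched as follows; `alpha` is a hypothetical tuning parameter, and both similarity scores are assumed to be normalised to [0, 1] (the thesis's actual combination may weight differently):

```python
def combined_similarity(struct_sim, content_sim, alpha=0.5):
    """Linear weighting combination of structure and content similarity:
    alpha is the structural weight; both inputs assumed in [0, 1]."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * struct_sim + (1 - alpha) * content_sim

# Two documents with very similar markup but different topics:
print(combined_similarity(0.9, 0.2))        # balanced view
print(combined_similarity(0.9, 0.2, 0.8))   # structure-dominated view
```

The non-linear alternative mentioned above would instead pass the two similarities through a kernel, letting structure and content interact rather than simply adding.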
To further the research, the structural clustering method based on tree model is extended and employed in XML transformation. The results from the experiments show that the proposed transformation process is faster than the traditional transformation system that translates and converts the source XML documents sequentially. Also, the schema matching process of XML transformation produces a better matching result in a shorter time.