992 results for metric access methods


Relevance: 40.00%

Publisher:

Abstract:

A RkNN query returns all objects whose k nearest neighbors contain the query object. In this paper, we consider RkNN query processing in the case where the distances between attribute values are not necessarily metric. Dissimilarities between objects could then be a monotonic aggregate of dissimilarities between their values, such aggregation functions being specified at query time. We outline real-world cases that motivate RkNN processing in such scenarios. We consider the AL-Tree index and its applicability to RkNN query processing. We develop an approach that exploits the group-level reasoning enabled by the AL-Tree in RkNN processing. We evaluate our approach against a naive approach that performs sequential scans on contiguous data, and against an improved block-based approach that we provide. We use real-world datasets and synthetic data with varying characteristics for our experiments. This extensive empirical evaluation shows that our approach is better than existing methods in terms of computational and disk access costs, leading to significantly better response times.
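The naive baseline the abstract evaluates against can be sketched as follows. This is an illustrative reading, not the paper's code: `rknn_naive` and the example attribute dissimilarities are hypothetical names, and the aggregate defaults to a sum, one possible monotonic aggregation function of the kind the abstract allows to be specified at query time.

```python
def rknn_naive(query, objects, k, per_attr_dissim, aggregate=sum):
    """Return every object whose k nearest neighbours include the query.

    Dissimilarity between two objects is a monotonic aggregate (a sum by
    default) of per-attribute dissimilarities, chosen at query time.
    """
    def dissim(a, b):
        return aggregate(d(x, y) for d, x, y in zip(per_attr_dissim, a, b))

    result = []
    for o in objects:
        # candidate neighbours of o: every other object, plus the query
        candidates = [c for c in objects if c is not o] + [query]
        candidates.sort(key=lambda c: dissim(o, c))
        if query in candidates[:k]:   # q is among o's k nearest neighbours
            result.append(o)
    return result

# Two numeric attributes, absolute difference per attribute, sum aggregate.
objs = [(1, 1), (2, 2), (9, 9), (10, 10)]
attr_dissims = [lambda x, y: abs(x - y)] * 2
answer = rknn_naive((1, 2), objs, k=1, per_attr_dissim=attr_dissims)
```

Each object's neighbour ranking requires a full scan, so this baseline is quadratic in the dataset size per query; the block-based and AL-Tree approaches in the paper exist precisely to avoid that cost.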

Relevance: 40.00%

Publisher:

Abstract:

We consider quasi-Newton methods for generalized equations in Banach spaces under metric regularity and give a sufficient condition for q-linear convergence. Then we show that the well-known Broyden update satisfies this sufficient condition in Hilbert spaces. We also establish various modes of q-superlinear convergence of the Broyden update under strong metric subregularity, metric regularity and strong metric regularity. In particular, we show that the Broyden update applied to a generalized equation in Hilbert spaces satisfies the Dennis–Moré condition for q-superlinear convergence. Simple numerical examples illustrate the results.
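The Broyden update discussed here has a compact form for equations F(x) = 0. A minimal numerical sketch (assuming the classical "good" Broyden update in the smooth, single-valued case; function names are hypothetical, and the generalized-equation setting of the paper is not reproduced):

```python
import numpy as np

def broyden(F, x0, B0, tol=1e-10, max_iter=100):
    """Solve F(x) = 0 with Broyden's 'good' rank-one Jacobian update."""
    x = np.asarray(x0, dtype=float)
    B = np.asarray(B0, dtype=float)
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)              # quasi-Newton step
        x_new = x + s
        F_new = F(x_new)
        y = F_new - Fx
        # Broyden update: B_{k+1} = B_k + (y - B_k s) s^T / (s^T s)
        B = B + np.outer(y - B @ s, s) / (s @ s)
        x, Fx = x_new, F_new
    return x

# Example: component-wise quadratics x^2 = 2, y^2 = 3, starting from (1, 1).
root = broyden(lambda v: np.array([v[0] ** 2 - 2.0, v[1] ** 2 - 3.0]),
               [1.0, 1.0], np.eye(2))
```

The convergence results in the abstract (q-linear under metric regularity, q-superlinear via the Dennis–Moré condition) concern this update applied to generalized equations; the sketch only illustrates the update itself.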

Relevance: 40.00%

Publisher:

Abstract:

"This edition is practically unchanged from the 10th ed., pub. in 1908."

Relevance: 40.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance: 40.00%

Publisher:

Abstract:

This research is focused on the optimisation of resource utilisation in wireless mobile networks, taking into account the quality of video streaming services experienced by users. The study specifically considers the new generation of mobile communication networks, i.e. 4G-LTE, as the main research context. The background study provides an overview of the main properties of the relevant technologies investigated. These include video streaming protocols and networks, video service quality assessment methods, the infrastructure and related functionalities of LTE, and resource allocation algorithms in mobile communication systems. A mathematical model based on an objective, no-reference quality assessment metric for video streaming, namely Pause Intensity, is developed in this work for the evaluation of the continuity of streaming services. The analytical model is verified by extensive simulation and subjective testing on the joint impairment effects of pause duration and pause frequency. Various types of video content and different levels of impairment were used in the validation tests. It has been shown that Pause Intensity is closely correlated with subjective quality measurement in terms of the Mean Opinion Score, and that this correlation is content-independent. Based on the Pause Intensity metric, an optimised resource allocation approach is proposed for given user requirements, communication system specifications and network performance. This approach concerns both system efficiency and fairness when establishing appropriate resource allocation algorithms, together with the consideration of the correlation between the required and allocated data rates per user. Pause Intensity plays a key role here, representing the required level of Quality of Experience (QoE) to ensure the best balance between system efficiency and fairness.
The 3GPP Long Term Evolution (LTE) system is used as the main application environment, in which the proposed research framework is examined and the results are compared with existing scheduling methods in terms of achievable fairness, efficiency and correlation. Adaptive video streaming technologies are also investigated and combined with our initiatives on determining the distribution of QoE performance across the network. The resulting scheduling process is controlled through the prioritisation of users by considering their perceived quality for the services received. Meanwhile, a trade-off between fairness and efficiency is maintained through an online adjustment of the scheduler’s parameters. Furthermore, Pause Intensity is applied as a regulator to realise the rate adaptation function during the end user’s playback of the adaptive streaming service. The adaptive rates under various channel conditions, and the shape of the QoE distribution amongst the users for different scheduling policies, have been demonstrated in the context of LTE. Finally, work on the interworking between the mobile communication system at the macro-cell level and different deployments of WiFi technologies throughout the macro-cell is presented. A QoE-driven approach is proposed to analyse the offloading mechanism of the user’s data (e.g. video traffic), while a new rate distribution algorithm reshapes the network capacity across the macro-cell. The scheduling policy derived is used to regulate the performance of the resource allocation across the fair-efficient spectrum. The associated offloading mechanism can properly control the number of users within the coverage of the macro-cell base station and of each WiFi access point involved. The performance of non-seamless and user-controlled mobile traffic offloading (through mobile WiFi devices) has been evaluated and compared with that of standard operator-controlled WiFi hotspots.
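The abstract does not give the Pause Intensity formula. As one plausible sketch of a pause-based continuity metric, combining the fraction of session time spent paused with the pause rate (this exact form and the function name are assumptions, not the thesis's definition):

```python
def pause_intensity(pause_durations, playback_time):
    """Combine pauses into one continuity score (assumed form):
    fraction of session time spent paused, times pauses per second."""
    total_pause = sum(pause_durations)
    session = playback_time + total_pause          # total viewing session
    duration_ratio = total_pause / session         # how long we pause
    pause_rate = len(pause_durations) / session    # how often we pause
    return duration_ratio * pause_rate

# Two pauses (2 s and 1 s) during 27 s of actual playback.
pi = pause_intensity([2.0, 1.0], 27.0)
```

A product of the two factors captures the joint impairment effect the abstract describes: a score of zero for uninterrupted playback, and higher scores for both longer and more frequent pauses.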

Relevance: 40.00%

Publisher:

Abstract:

Although mitigating GHG emissions is necessary to reduce the overall negative impacts of climate change on crop yields and agricultural production, certain mitigation measures may generate unintended consequences for food availability and access, due to land use competition and the economic burden of mitigation. Prior studies have examined the co-impacts on food availability and global producer prices caused by alternative climate policies. More recent studies have looked at the reduction in total caloric intake driven by both changing income and changing food prices under one specific climate policy. However, because calorie demand is inelastic, consumers’ well-being is likely further reduced by increased food expenditures. Building on the existing literature, my dissertation explores how alternative climate policy designs might adversely affect both caloric intake and the staple food budget share to 2050, using the Global Change Assessment Model (GCAM) and a post-estimated metric of food availability and access (FAA). The dissertation first develops a set of new metrics and methods to explore food availability and access under new conditions. The FAA metric consists of two components: the fraction of GDP per capita spent on five categories of staple food, and total caloric intake relative to a reference level. Testing the metric against alternative expectations of the future yields results consistent with previous studies, namely that economic growth dominates the improvement of FAA. As ambition increases toward stringent climate targets, two policy conditions tend to have large impacts on FAA, driven by competing land use and increasing food prices. Strict conservation policies confine the competition between bioenergy and agricultural production to existing commercial land, while pricing terrestrial carbon encourages large-scale afforestation.
To avoid unintended outcomes for food availability and access for the poor, pricing land emissions in frontier forests has the advantage of selecting more productive land for agricultural activities compared to the full conservation approach, but the land carbon price should not be linked to the price of energy system emissions. These results are highly relevant to effective policy-making to reduce land use change emissions, such as Reduced Emissions from Deforestation and Forest Degradation (REDD).
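As described above, the FAA metric has two components. A hypothetical post-processing helper (names, units, and the plain-ratio form are assumptions for illustration, not GCAM output conventions) might look like:

```python
def faa_components(gdp_per_capita, staple_expenditure,
                   caloric_intake, reference_calories):
    """The two FAA components described above (hypothetical helper):
    staple-food budget share and caloric intake relative to a reference."""
    budget_share = staple_expenditure / gdp_per_capita
    relative_calories = caloric_intake / reference_calories
    return budget_share, relative_calories

# E.g. $1,500 of a $10,000 GDP per capita spent on the five staple
# categories, and 2,200 kcal/day against a 2,000 kcal/day reference.
share, rel_cal = faa_components(10000.0, 1500.0, 2200.0, 2000.0)
```

On this reading, a rising budget share or a relative caloric intake falling below 1 would both signal the deterioration in food availability and access that the dissertation tracks under the alternative policy designs.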

Relevance: 40.00%

Publisher:

Abstract:

The increase in resolution of numerical weather prediction models has allowed increasingly realistic forecasts of atmospheric parameters. Owing to the growing variability of the predicted fields, traditional verification methods are not always able to describe model skill, because they are based on grid-point-by-grid-point matching between observation and prediction. Recently, new spatial verification methods have been developed with the aim of showing the benefit associated with high-resolution forecasts. Within the MesoVICT international project, the initial aim of this work is to compare the new techniques, noting their advantages and disadvantages. First, the MesoVICT basic examples, represented by synthetic precipitation fields, were examined. Because it gives an error evaluation in terms of structure, amplitude and location of the precipitation fields, the SAL method was studied more thoroughly than the other approaches and implemented for the core cases of the project. The verification procedure concerned precipitation fields over central Europe: the forecasts of the 00z COSMO-2 model were compared with VERA (Vienna Enhanced Resolution Analysis). The study of these cases revealed some weaknesses of the methodology; in particular, a correlation between the optimal domain size and the extent of the precipitation systems was highlighted. To improve the ability of SAL, the original domain was subdivided into three subdomains and the method applied again. Some limits were found in cases in which at least one of the two fields shows no precipitation. The overall results for the subdomains were summarised in scatter plots, and, with the aim of identifying systematic model errors, the variability of the three parameters was studied for each subdomain.
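As a rough illustration of the kind of error decomposition SAL performs, the amplitude component and the domain-wide part of the location component can be sketched as below. The function name is hypothetical, the structure component and object-based second location term are omitted, and the published SAL definitions also involve thresholding and object identification not shown here.

```python
import numpy as np

def sal_amplitude_location(obs, fcst):
    """Amplitude (A) and first location (L1) components of SAL for two
    precipitation fields on the same grid (structure component omitted)."""
    # A: normalised difference of domain-mean precipitation, in [-2, 2]
    A = (fcst.mean() - obs.mean()) / (0.5 * (fcst.mean() + obs.mean()))

    def centre_of_mass(field):
        ii, jj = np.indices(field.shape)
        w = field.sum()
        return np.array([(ii * field).sum() / w, (jj * field).sum() / w])

    # L1: distance between the fields' centres of mass, scaled by an
    # approximation of the largest distance within the domain, in [0, 1]
    d_max = np.hypot(*obs.shape)
    L1 = np.linalg.norm(centre_of_mass(fcst) - centre_of_mass(obs)) / d_max
    return A, L1

# A doubled forecast over a uniform observed field: A = 2/3, L1 = 0.
obs = np.ones((4, 4))
A, L1 = sal_amplitude_location(obs, 2 * obs)
```

The limitation noted in the abstract is visible here: when one field has no precipitation in a subdomain, its domain mean and centre of mass are undefined or degenerate, so the components cannot be evaluated.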

Relevance: 30.00%

Publisher:

Abstract:

This study, through its exploration of the attached play scripts and their method of development, evaluates the forms, strategies and methods of an organised model of formalised playwriting. Through examination of, reflection on, and reaction to a perceived crisis in playwriting in the Australian theatre sector, the notion of Industrial Playwriting is arrived at: a practice whereby plays are designed and constructed, and where the process of writing becomes central to the efficient creation of new work and the improvement of the writer’s skill and knowledge base. Using a practice-led methodology and action research, the study examines a system of play construction appropriate to, and addressing the challenges of, the contemporary Australian theatre sector. Specifically, using the action research methodology known as design-based research, a conceptual framework was constructed to form the basis of the notion of Industrial Playwriting. From this, two plays were constructed using a case study method, and the process was recorded and used to create a practical, step-by-step system of Industrial Playwriting. In the creative practice of manufacturing a single-authored play, and then a group-devised play, Industrial Playwriting was tested and found to offer a valid alternative approach to playwriting in the training of new and even emerging playwrights. Finally, it offered insight into how Industrial Playwriting could help theatre companies meet their ongoing need for access to new writers and new Australian works, and how it might form the basis of a cost-effective writer development model. This study of the methods of formalised writing as a means to confront some of the challenges of the Australian theatre sector, the practice of playwriting and the history associated with it makes an original and important contribution to contemporary playwriting practice.

Relevance: 30.00%

Publisher:

Abstract:

This paper reports on Years 8, 9 and 10 students’ knowledge of percent problem types, use of diagrams, and type of solution strategy. Non-proficient and semi-proficient students displayed the expected inflexible, formula-based approach to solution, but proficient students used a flexible mixture of estimation, number sense and trial and error instead of the expected schema-based methods.

Relevance: 30.00%

Publisher:

Abstract:

Dynamic load sharing can be defined as a measure of the ability of a heavy vehicle multi-axle group to equalise load across its wheels under typical travel conditions; i.e. in the dynamic sense, at typical travel speeds and operating conditions of that vehicle. Various attempts have been made to quantify the ability of heavy vehicles to equalise the load across their wheels during travel. One of these was the concept of the load sharing coefficient (LSC). Other metrics, such as the dynamic load coefficient (DLC), have been used to compare one heavy vehicle suspension with another for potential road damage. This paper compares these metrics and determines a relationship between DLC and LSC, with a sensitivity analysis of this relationship. The shortcomings of these presently available metrics are discussed, and a new metric, the dynamic load equalisation (DLE) measure, is proposed.
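The two established metrics compared in the paper have widely used definitions in the heavy-vehicle suspension literature: DLC as the coefficient of variation of a wheel-force record, and LSC as each wheel's share of the group's mean load. A minimal sketch, assuming those standard definitions (function names are ours):

```python
import statistics

def dlc(wheel_forces):
    """Dynamic load coefficient: coefficient of variation (std dev over
    mean) of a single wheel's measured force record over a test run."""
    return statistics.pstdev(wheel_forces) / statistics.fmean(wheel_forces)

def lsc(mean_wheel_forces):
    """Load sharing coefficient per wheel: each wheel's mean force over
    the nominal equal share for the axle group."""
    nominal = statistics.fmean(mean_wheel_forces)
    return [f / nominal for f in mean_wheel_forces]

# A perfectly load-sharing group gives LSC = 1.0 for every wheel;
# deviations from 1.0 quantify unequal sharing.
shares = lsc([12.0, 8.0])
```

Note the complementary scopes that motivate comparing the two: DLC summarises the time variability of one wheel's load, while LSC summarises the spatial distribution of mean load across the group.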

Relevance: 30.00%

Publisher:

Abstract:

This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model for the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy compression and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20 millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification.
The motivation for this aspect of the research arose from a need to simultaneously preserve speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications, where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective, that of maximizing the identification rate, the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model based on the use of compressed speech are put forward.
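The product-code structure referred to above splits each vector into sub-vectors, each quantized against its own smaller codebook, so the transmitted code is a tuple of indices rather than one index into a huge joint codebook. A minimal sketch with toy, untrained codebooks (illustrative only; the thesis's trained codebooks, fast-search methods and index-compression model are not reproduced):

```python
import numpy as np

def pcvq_encode(vec, codebooks, widths):
    """Quantize each sub-vector against its own codebook; the code is
    the tuple of chosen indices (one per sub-codebook)."""
    indices, start = [], 0
    for cb, width in zip(codebooks, widths):
        sub = vec[start:start + width]
        dists = ((cb - sub) ** 2).sum(axis=1)   # squared error per entry
        indices.append(int(dists.argmin()))
        start += width
    return tuple(indices)

def pcvq_decode(indices, codebooks):
    """Reassemble the quantized vector from the per-codebook entries."""
    return np.concatenate([cb[i] for cb, i in zip(codebooks, indices)])

# Toy codebooks: a 2-D sub-codebook and a 1-D sub-codebook.
codebooks = [np.array([[0.0, 0.0], [1.0, 1.0]]), np.array([[5.0], [9.0]])]
code = pcvq_encode(np.array([0.9, 1.1, 8.5]), codebooks, widths=[2, 1])
```

The complexity benefit is that each sub-codebook is searched independently, so encoder cost grows with the sum of sub-codebook sizes rather than their product.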

Relevance: 30.00%

Publisher:

Abstract:

In open railway access markets, a train service provider (TSP) negotiates with an infrastructure provider (IP) for track access rights. This negotiation has been modeled by a multi-agent system (MAS) in which the IP and TSP are represented by separate software agents. One task of the IP agent is to generate feasible (and preferably optimal) track access rights, subject to the constraints submitted by the TSP agent. This paper formulates an IP-TSP transaction and proposes a branch-and-bound algorithm for the IP agent to identify the optimal track access rights. Empirical simulation results show that the model is able to emulate rational agent behaviors. The simulation results also show good consistency between timetables attained from the proposed methods and those derived by the scheduling principles adopted in practice.
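The paper's exact formulation of the IP-TSP transaction is not reproduced here. As a rough illustration of how a branch-and-bound search over track access rights might be structured, consider a toy model in which each candidate right has a utility and a slot cost, and the IP agent maximises total utility within a capacity limit (the model and all names are hypothetical):

```python
def branch_and_bound(rights, capacity):
    """rights: list of (utility, slots) pairs for candidate access rights.
    Returns the best total utility achievable within the slot capacity."""
    best = 0

    def upper_bound(i, value):
        # optimistic bound: grant every remaining right, ignoring capacity
        return value + sum(u for u, _ in rights[i:])

    def explore(i, used, value):
        nonlocal best
        best = max(best, value)
        if i == len(rights) or upper_bound(i, value) <= best:
            return                     # prune: cannot beat the incumbent
        u, slots = rights[i]
        if used + slots <= capacity:
            explore(i + 1, used + slots, value + u)   # grant right i
        explore(i + 1, used, value)                   # reject right i

    explore(0, 0, 0)
    return best

# Three candidate rights within an 8-slot capacity.
best_utility = branch_and_bound([(10, 5), (6, 4), (5, 4)], capacity=8)
```

The pruning step is what makes branch-and-bound attractive in a negotiation loop: whole subtrees of infeasible or dominated track allocations are discarded without enumeration, which matters when the TSP agent's constraints change between rounds.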

Relevance: 30.00%

Publisher:

Abstract:

INTRODUCTION: Since the introduction of its QUT ePrints institutional repository of published research outputs, together with the world’s first mandate for author contributions to an institutional repository, Queensland University of Technology (QUT) has been a leader in support of the green road to open access. With QUT ePrints providing our mechanism for supporting the green road to open access, QUT has since also continued to expand its secondary open access strategy supporting the gold road, which is likewise designed to assist QUT researchers to maximise the accessibility, and thus the impact, of their research. ---------- METHODS: QUT Library has adopted the position of selectively supporting true gold road open access publishing by using the Library Resource Allocation budget to pay the author publication fees for QUT authors wishing to publish in the open access journals of a range of publishers, including BioMed Central, Public Library of Science and Hindawi. QUT Library has been careful to support only true open access publishers, and not those open access publishers with hybrid models which “double dip” by charging authors publication fees and libraries subscription fees for the same journal content. QUT Library has maintained a watch on the growing number of open access journals available from gold road open access publishers and their increasing rate of success as measured by publication impact. ---------- RESULTS: This paper reports on the successes and challenges of QUT’s efforts to support true gold road open access publishers and to promote these publishing strategy options to researchers at QUT. The number and spread of QUT papers submitted and published in the journals of each publisher is provided.
Citation counts for papers and authors are also presented and analysed, with the intention of identifying the benefits to accessibility and research impact for early career and established researchers. ---------- CONCLUSIONS: QUT Library is eager to continue and further develop support for this publishing strategy, and makes a number of recommendations to other research institutions on how they can best achieve success with it.