837 results for real-life research
Abstract:
In many advanced applications, data are described by multiple high-dimensional features. Moreover, different queries may weight these features differently; some may not even specify all the features. In this paper, we propose our solution to support efficient query processing in these applications. We devise a novel representation that compactly captures f features in two components: the first is a 2D vector that reflects the distance range (minimum and maximum values) of the f features with respect to a reference point (the center of the space) in a metric space; the second is a bit signature, with two bits per dimension, obtained by analyzing each feature's descending energy histogram. This representation enables two levels of filtering: the first component prunes away points that do not share similar distance ranges, while the bit signature filters away points based on the dimensions of the relevant features. Moreover, the representation facilitates the use of a single index structure to further speed up processing. We employ the classical B+-tree for this purpose. We also propose a KNN search algorithm that exploits the access orders of critical dimensions of highly selective features and partial distances to prune the search space more effectively. Our extensive experiments on both real-life and synthetic data sets show that the proposed solution offers significant performance advantages over sequential scan and over retrieval methods using single and multiple VA-files.
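A minimal sketch of the two-level filtering idea described above, in Python. The helper names (`build_key`, `range_filter`) and the coarse 4-level quantisation used to fill the bit signature are illustrative assumptions; the paper derives the two bits per dimension from each feature's descending energy histogram, which is not reproduced here.

```python
import numpy as np

def build_key(features, reference):
    """Hypothetical sketch of the two-component key: a [min, max]
    range of feature-to-reference distances plus a 2-bit-per-dimension
    signature (a coarse quantisation stands in for the paper's
    descending-energy-histogram derivation)."""
    dists = [np.linalg.norm(f - reference) for f in features]
    dist_range = (min(dists), max(dists))
    signature = 0
    for f in features:
        for value in f:
            level = min(int(abs(value) * 4), 3)   # 2 bits per dimension
            signature = (signature << 2) | level
    return dist_range, signature

def range_filter(query_range, point_range):
    # Level-1 filter: a point survives only if its distance range
    # overlaps the query's distance range.
    return not (point_range[1] < query_range[0] or
                point_range[0] > query_range[1])

# toy usage: two features of a point, each a 3-D vector in [0, 1]
ref = np.zeros(3)
key = build_key([np.array([0.2, 0.5, 0.1]), np.array([0.9, 0.3, 0.7])], ref)
print(key)
```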
Abstract:
This paper presents a new multi-depot combined vehicle and crew scheduling algorithm and uses it, in conjunction with a heuristic vehicle routing algorithm, to solve the intra-city mail distribution problem faced by Australia Post. First we describe the Australia Post mail distribution problem and outline the heuristic vehicle routing algorithm used to find vehicle routes. We then present a new multi-depot combined vehicle and crew scheduling algorithm based on set covering with column generation. The paper concludes with a computational investigation that examines the effect of different types of vehicle routing solutions on the vehicle and crew scheduling solution, compares the levels of integration possible with the new vehicle and crew scheduling algorithm, and compares sequential versus simultaneous vehicle and crew scheduling, using real-life data for Australia Post distribution networks.
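The set-covering core of this scheduling formulation can be illustrated with a toy sketch. Note that the paper solves it via column generation over candidate duties; the greedy heuristic below is a deliberately simplified stand-in, and all names and data are invented for illustration.

```python
def greedy_set_cover(tasks, columns):
    # columns: (cost, set_of_tasks_covered); repeatedly pick the
    # column with the best cost per newly covered task.
    uncovered = set(tasks)
    chosen = []
    while uncovered:
        cost, cover = min(
            (c for c in columns if c[1] & uncovered),
            key=lambda c: c[0] / len(c[1] & uncovered),
        )
        chosen.append((cost, cover))
        uncovered -= cover
    return chosen

# toy data: tasks are trips, columns are candidate vehicle/crew duties
duties = [(3.0, {1, 2}), (2.0, {2, 3}), (4.0, {1, 3, 4}), (1.0, {4})]
print(greedy_set_cover({1, 2, 3, 4}, duties))
```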
Abstract:
It has been demonstrated, using abstract psychophysical stimuli, that speeds appear slower when contrast is reduced under certain conditions. Does this effect have any real-life consequences? One previous study, using a low-fidelity driving simulator, found that participants perceived vehicle speeds to be slower in foggy conditions. We replicated this finding with a more realistic video-based simulator using the Method of Constant Stimuli. We also found that lowering contrast reduced participants' ability to discriminate speeds. We argue that these reduced-contrast effects could partly explain the higher crash rate of drivers with cataracts, a substantial societal problem in which reduced contrast can account for part of the variance in crash risk. Note that even if people with cataracts can calibrate for the shift in their perception of speed using their speedometers (given that cataracts are experienced over long periods), they may still have an increased chance of making errors in speed estimation due to poor speed discrimination. This could result in individuals misjudging vehicle trajectories and thereby inflating their crash risk. We propose interventions that may help address this problem.
Abstract:
Terrain can be approximated by a triangular mesh consisting of millions of 3D points. Multiresolution triangular mesh (MTM) structures are designed to support applications that use terrain data at variable levels of detail (LOD). Typically, an MTM adopts a tree structure where a parent node represents a lower-resolution approximation of its descendants. Given a region of interest (ROI) and a LOD, retrieving the required terrain data from the database means traversing the MTM tree from the root to reach all the nodes satisfying the ROI and LOD conditions. This process, while commonly used for multiresolution terrain visualization, is inefficient because it incurs either a large number of sequential I/O operations or the fetching of a large amount of extraneous data. Various spatial indexes have been proposed in the past to address this problem; however, level-by-level tree traversal remains common practice in order to obtain topological information among the retrieved terrain data. We propose a new MTM data structure called direct mesh and demonstrate that it substantially reduces the amount of data retrieved. Compared with existing MTM indexing methods, a significant performance improvement has been observed for real-life terrain data.
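A sketch of the level-by-level ROI/LOD traversal that the abstract identifies as the common (and inefficient) practice. The `Node` fields and the bounding-box test are assumptions for illustration, not the paper's direct-mesh structure.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    level: int
    bbox: tuple                      # (xmin, ymin, xmax, ymax)
    children: list = field(default_factory=list)

    def bbox_intersects(self, roi):
        xmin, ymin, xmax, ymax = self.bbox
        rx0, ry0, rx1, ry1 = roi
        return not (xmax < rx0 or xmin > rx1 or ymax < ry0 or ymin > ry1)

def retrieve(root, roi, lod):
    # Descend from the root, pruning subtrees outside the ROI and
    # stopping refinement once the requested LOD is reached.
    results, stack = [], [root]
    while stack:
        n = stack.pop()
        if not n.bbox_intersects(roi):
            continue                  # outside the region of interest
        if n.level >= lod or not n.children:
            results.append(n)         # fine enough (or a leaf): emit
        else:
            stack.extend(n.children)  # refine to the next level
    return results

leaf = Node(2, (0, 0, 1, 1))
root = Node(0, (0, 0, 4, 4),
            [Node(1, (0, 0, 2, 2), [leaf]), Node(1, (2, 2, 4, 4))])
print(retrieve(root, roi=(0, 0, 1, 1), lod=2))
```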
Abstract:
There has been a greater emphasis over the past few years on encouraging high school students to take up engineering as a career. This is due to a greater need for engineers in society, particularly in areas suffering a skills shortage. Both the engineering profession and universities across Australia have moved to address this shortage, resulting in a proliferation of engineering outreach activities and programs. The Engineering Link Group (TELG) began the Engineering Link Project (ELP) over a decade ago with a focus on helping motivated high school students make an informed choice about engineering as a career. It also aimed to encourage more high school students to study maths and science at high school. From the start the ELP was designed so that the students became engineers, rather than just hearing from or watching engineers. Real working engineers pose problems to groups of students for them to solve over the course of a day. In this way, students experience what it is like to be an engineer. It has been found that the project does help high school students make more informed career choices about engineering. The project also gave the students real-life, practical reasons for studying sciences and mathematics at high school. © 2005, Australasian Association for Engineering Education
Abstract:
In multimedia retrieval, a query is typically refined interactively towards the 'optimal' answers by exploiting user feedback. However, in existing work, the refined query is re-evaluated in each iteration. This is not only inefficient but also fails to exploit the answers that may be common between iterations. In this paper, we introduce a new approach called SaveRF (Save random accesses in Relevance Feedback) for iterative relevance feedback search. SaveRF predicts the potential candidates for the next iteration and maintains this small set for efficient sequential scan. By doing so, repeated candidate accesses can be saved, reducing the number of random accesses. In addition, an efficient scan of the overlap before the search starts tightens the search space with a smaller pruning radius. We implemented SaveRF, and our experimental study on real-life data sets shows that it can reduce the I/O cost significantly.
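A rough sketch of the SaveRF idea under stated assumptions: keep a slightly enlarged candidate set from one feedback iteration and sequentially scan only that set in the next. The `slack` factor and the function signature are invented for illustration and are not the paper's notation.

```python
import numpy as np

def feedback_iteration(candidates, data, query, k, slack=1.5):
    # Sequentially scan only the maintained candidate set instead of
    # re-evaluating the refined query against the whole dataset.
    dists = np.linalg.norm(data[candidates] - query, axis=1)
    order = np.argsort(dists)
    knn = [candidates[i] for i in order[:k]]
    # retain extra near-misses as predicted candidates for the next
    # refined query, so their pages need not be fetched again
    next_candidates = [candidates[i] for i in order[:int(k * slack)]]
    return knn, next_candidates

rng = np.random.default_rng(0)
data = rng.random((100, 8))                      # 100 points, 8-D features
knn, nxt = feedback_iteration(list(range(100)), data, rng.random(8), k=5)
print(knn, len(nxt))
```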
Abstract:
The Mercedes-Benz brand is legendary and is present in the consumer's imagination whenever the subject is automobiles. The history of this important automobile manufacturer spans more than 100 years (1902-2008), and it continues to impress with new models that are synonymous with technology, quality, safety, and luxury. This work aims to analyze and understand the brand communication process of Mercedes-Benz, which emerged in 1902 (as Mercedes), later became Mercedes-Benz (1926) and DaimlerChrysler (1998), and currently operates as Daimler AG (headquartered in Germany) and Mercedes-Benz do Brasil Ltda., and which remains strong and prestigious in its segment, according to the figures and information presented, corroborated by automotive-industry professionals and luxury specialists. The study also addresses the form and meaning of the brand as represented by its logo throughout its history: initial sketch, changes, and current design (graphic aspects), as well as the messages conveyed and the communication with the market. The methodology consists of a bibliographic survey of publications, periodicals, internal documents, dissertations, and theses containing information on the Mercedes-Benz brand from its beginnings to the present, with the aim of comparing the periods, especially in the Brazilian context. The work treats the subject as a case study, since it is an organizational and managerial analysis of contemporary phenomena embedded in real life. The study takes Integrated Marketing Communication as its line of research and Specialized Communication as its area of concentration. (AU)
Abstract:
Mixture Density Networks are a principled method to model conditional probability density functions which are non-Gaussian. This is achieved by modelling the conditional distribution for each pattern with a Gaussian Mixture Model whose parameters are generated by a neural network. This thesis presents a novel method to introduce regularisation in this context for the special case where the mean and variance of the spherical Gaussian kernels in the mixtures are fixed to predetermined values. Guidelines for how these parameters can be initialised are given, and it is shown how to apply the evidence framework to mixture density networks to achieve regularisation. This also provides an objective stopping criterion that can replace the 'early stopping' methods that have previously been used. If the neural network used is an RBF network with fixed centres, this opens up new opportunities for improved initialisation of the network weights, which are exploited to start training relatively close to the optimum. The new method is demonstrated on two data sets. The first is a simple synthetic data set, while the second is a real-life data set, namely satellite scatterometer data used to infer the wind speed and wind direction near the ocean surface. For both data sets the regularisation method performs well in comparison with earlier published results. Ideas on how the constraint on the kernels may be relaxed to allow fully adaptable kernels are presented.
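The conditional density such a network evaluates can be written out directly. The sketch below computes p(t|x) = Σ_j π_j(x) N(t; μ_j, σ²I) for the special case of fixed spherical kernels, where the network contributes only the mixing coefficients; variable names are illustrative.

```python
import numpy as np

def mdn_density(t, priors, centres, sigma):
    # p(t | x) for fixed spherical Gaussian kernels: the network's
    # only input-dependent outputs are the mixing coefficients.
    d = t.shape[-1]
    norm = (2 * np.pi * sigma ** 2) ** (-d / 2)       # kernel normaliser
    sq = np.sum((t - centres) ** 2, axis=-1)          # squared distances to centres
    return float(np.sum(priors * norm * np.exp(-sq / (2 * sigma ** 2))))

# toy check: three fixed kernels in a 2-D target space
centres = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
priors = np.array([0.5, 0.3, 0.2])                    # e.g. from the network's softmax
print(mdn_density(np.array([0.2, 0.1]), priors, centres, sigma=0.4))
```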
Abstract:
Data visualization algorithms and feature selection techniques are both widely used in bioinformatics but as distinct analytical approaches. Until now there has been no method of measuring feature saliency while training a data visualization model. We derive a generative topographic mapping (GTM) based data visualization approach which estimates feature saliency simultaneously with the training of the visualization model. The approach not only provides a better projection by modeling irrelevant features with a separate noise model but also gives feature saliency values which help the user to assess the significance of each feature. We compare the quality of projection obtained using the new approach with the projections from traditional GTM and self-organizing maps (SOM) algorithms. The results obtained on a synthetic and a real-life chemoinformatics dataset demonstrate that the proposed approach successfully identifies feature significance and provides coherent (compact) projections. © 2006 IEEE.
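One way to picture the feature-saliency mechanism is as a per-feature mixture of the component density and a shared noise density, with the saliency as the mixing weight. The Gaussian forms and names below are assumptions for illustration, not the exact GTM formulation of the paper.

```python
import numpy as np

def feature_likelihood(x, mu, sigma, noise_mu, noise_sigma, saliency):
    # Each feature d mixes the component density (weight rho_d) with
    # a shared noise density (weight 1 - rho_d); irrelevant features
    # drift to low saliency and are absorbed by the noise model.
    def gauss(v, m, s):
        return np.exp(-0.5 * ((v - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    per_feature = (saliency * gauss(x, mu, sigma) +
                   (1.0 - saliency) * gauss(x, noise_mu, noise_sigma))
    return float(np.prod(per_feature))    # features treated as independent

# feature 0 is salient (rho = 0.9), feature 1 mostly noise (rho = 0.2)
x = np.array([0.1, 2.5])
print(feature_likelihood(x,
                         mu=np.array([0.0, 0.0]), sigma=np.array([1.0, 1.0]),
                         noise_mu=np.array([0.0, 0.0]),
                         noise_sigma=np.array([3.0, 3.0]),
                         saliency=np.array([0.9, 0.2])))
```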
Abstract:
This thesis is developed from a real-life application: performance evaluation of small and medium-sized enterprises (SMEs) in Vietnam. The thesis presents two main methodological developments on evaluating the impact of dichotomous environment variables on technical efficiency. Taking into account the selection bias, the thesis proposes a revised frontier separation approach for the seminal Data Envelopment Analysis (DEA) model developed by Charnes, Cooper, and Rhodes (1981). The revised frontier separation approach is based on nearest-neighbour propensity score matching, pairing treated SMEs with their counterfactuals on the propensity score. The thesis also develops an order-m frontier conditioned on the propensity score, extending the conditional order-m approach proposed by Cazals, Florens, and Simar (2002) and advocated by Daraio and Simar (2005). This development allows the conditional order-m approach to be applied with a dichotomous environment variable while taking into account the self-selection problem of impact evaluation. Monte Carlo-style simulations have been built to examine the effectiveness of these developments. The methodological developments of the thesis are applied in empirical studies to evaluate the impact of training programmes on the performance of food processing SMEs and the impact of exporting on the technical efficiency of textile and garment SMEs in Vietnam. The analysis shows that training programmes have no significant impact on the technical efficiency of food processing SMEs. Moreover, the analysis confirms the conclusion of the export literature that exporters self-select into the sector. The thesis finds no significant impact of exporting activities on the technical efficiency of textile and garment SMEs; however, a large bias has been eliminated by the proposed approach. The results of the empirical studies contribute to the understanding of the impact of different environment variables on the performance of SMEs and help policy makers design appropriate policies to support the development of Vietnamese SMEs.
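The nearest-neighbour propensity-score matching step can be sketched in a few lines. The propensity scores are taken as given (in practice they would come from, e.g., a logistic regression on SME characteristics); the data below are invented.

```python
import numpy as np

def nearest_neighbour_match(treated_ps, control_ps):
    # Pair each treated unit with the control whose estimated
    # propensity score is closest (1-NN matching with replacement).
    pairs = []
    for i, p in enumerate(treated_ps):
        j = int(np.argmin(np.abs(control_ps - p)))
        pairs.append((i, j))
    return pairs

treated = np.array([0.62, 0.33, 0.80])        # scores of treated SMEs
controls = np.array([0.30, 0.58, 0.75, 0.40])
print(nearest_neighbour_match(treated, controls))   # [(0, 1), (1, 0), (2, 2)]
```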
Abstract:
Offshore oil and gas pipelines pose environmental hazards, as any leak or burst causes an oil/gas spill with severe negative impacts on marine life. Breakdown maintenance of these pipelines is also cost-intensive and time-consuming, resulting in large tangible and intangible losses to pipeline operators. Pipeline health monitoring and integrity analysis have been researched extensively to support successful pipeline operations, and the risk-based maintenance model is one outcome of that research. This study develops a risk-based maintenance model using a combined multiple-criteria decision-making and weight method for offshore oil and gas pipelines in Thailand, with the active participation of experienced executives. The model's effectiveness has been demonstrated through a real-life application to oil and gas pipelines in the Gulf of Thailand. Practical implications: Risk-based inspection and maintenance methodology is particularly important for oil pipeline systems, as any failure in the system not only affects productivity negatively but also has a tremendous negative environmental impact. The proposed model helps pipeline operators analyze the health of pipelines dynamically and select a specific inspection and maintenance method for each specific section, in line with its probability and severity of failure.
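A toy sketch of the risk scoring that underlies such a risk-based maintenance model: a section's risk is its failure probability times a weighted sum of severity criteria. The criteria, weights, and values below are illustrative assumptions, not the paper's elicited expert judgments.

```python
def risk_score(probability, severity_scores, weights):
    # risk = failure probability x weighted severity across criteria
    severity = sum(w * s for w, s in zip(weights, severity_scores))
    return probability * severity

# a section with moderate failure probability but high environmental severity
weights = [0.5, 0.3, 0.2]                    # environment, safety, cost (assumed)
print(risk_score(0.3, [9, 6, 4], weights))   # 0.3 * (4.5 + 1.8 + 0.8) = 2.13
```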
Abstract:
Aim: To investigate the correlation between tests of visual function and perceived visual ability recorded with a quality of life questionnaire for patients with uveitis. Methods: 132 patients with various types of uveitis were studied. High (monocular and binocular) and low (binocular) contrast logMAR letter acuities were recorded using a Bailey-Lovie chart. Contrast sensitivity (binocular) was determined using a Pelli-Robson chart. Vision-related quality of life was assessed using the Vision Specific Quality of Life (VQOL) questionnaire. Results: VQOL declined with reduced performance on the following tests: binocular high contrast visual acuity (p = 0.0011), high contrast visual acuity of the better eye (p = 0.0012), contrast sensitivity (p = 0.005), binocular low contrast visual acuity (p = 0.0065), and high contrast visual acuity of the worse eye (p = 0.015). Stepwise multiple regression analysis revealed binocular high contrast visual acuity (p < 0.01) to be the only visual function adequate to predict VQOL. The age of the patient was also significantly associated with perceived visual ability (p < 0.001). Conclusions: Binocular high contrast visual acuity is a good measure of how uveitis patients perform in real-life situations. Vision-related quality of life is worst in younger patients with poor binocular visual acuity.
Abstract:
Urine proteomics is emerging as a powerful tool for biomarker discovery. The purpose of this study is to develop a well-characterized "real life" sample that can be used as a reference standard in urine clinical proteomics studies.
Abstract:
This article examines the negotiation of face in post-observation feedback conferences on an initial teacher training programme. The conferences were held in groups with one trainer and up to four trainees and followed a set of generic norms. These norms include the right to offer advice and to criticise, speech acts which are often considered face-threatening in more ordinary contexts. However, as the data analysis shows, participants also interact in ways that challenge the generic norms, some of which might be considered more conventionally face-attacking. The article argues that face should be analysed at the level of interaction (Haugh and Bargiela-Chiappini, 2010) and that situated and contextual detail is relevant to its analysis. It suggests that linguistic ethnography, which 'marries' (Wetherell, 2007) linguistics and ethnography, provides a useful theoretical framework for doing so. To this end the study draws on real-life talk-in-interaction (from transcribed recordings), the participants' perspectives (from focus groups and interviews) and situated detail (from fieldnotes) to produce a contextualised and nuanced analysis. © 2011 Elsevier B.V.