950 results for Tuning.
Abstract:
Data structures such as k-D trees and hierarchical k-means trees perform very well in approximate k nearest neighbour matching, but are only marginally more effective than linear search when performing exact matching in high-dimensional image descriptor data. This paper presents several improvements to linear search that allow it to outperform existing methods, and recommends two approaches to exact matching. The first method reduces the number of operations by evaluating the distance measure in order of significance of the query dimensions and terminating when the partial distance exceeds the search threshold. This method does not require preprocessing and significantly outperforms existing methods. The second method improves query speed further by presorting the data using a data structure called d-D sort. The order information is used as a priority queue to reduce the time taken to find the exact match and to restrict the range of data searched. The d-D sort structure is very simple to construct, does not require any parameter tuning, and takes significantly less time to build than the best-performing tree structure; data can also be added to the structure relatively efficiently.
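The first method is essentially a partial-distance search: the squared distance is accumulated dimension by dimension, visiting the more significant dimensions first, and a candidate is abandoned as soon as the running sum exceeds the best distance found so far. Below is a minimal Python sketch of this idea; the significance ordering (by query magnitude) and the data layout are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def nearest_neighbour_partial_distance(query, data):
    """Exact 1-NN by linear search with early termination.

    Dimensions are visited in decreasing order of the query's magnitude
    (an illustrative 'significance' heuristic), so large contributions to
    the distance accumulate first and poor candidates are rejected after
    only a few dimensions.
    """
    order = np.argsort(-np.abs(query))      # most significant dimensions first
    q = query[order]
    best_idx, best_dist = -1, np.inf

    for i, row in enumerate(data):
        r = row[order]
        partial = 0.0
        for d in range(len(q)):
            diff = q[d] - r[d]
            partial += diff * diff
            if partial >= best_dist:        # cannot beat the current best: stop early
                break
        else:                               # loop completed: new best candidate
            best_idx, best_dist = i, partial
    return best_idx, np.sqrt(best_dist)

# Example usage with random descriptor-like data.
rng = np.random.default_rng(0)
data = rng.random((1000, 128))
query = rng.random(128)
idx, dist = nearest_neighbour_partial_distance(query, data)
```

Because no preprocessing is needed beyond reading the query, this style of search can be applied directly to raw descriptor arrays.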
Abstract:
Teachers of construction economics and estimating have for a long time recognised that there is more to construction pricing than detailed calculation of costs (to the contractor). We always get to the point where we have to say "of course, experience or familiarity with the market is very important and this needs judgement, intuition, etc". Quite how important this is in construction pricing is not known, and we tend to trivialise its effect. If judgement of the market has a minimal effect, little harm would be done, but if it is really important then some quite serious consequences arise which go well beyond the teaching environment. Major areas of concern for the quantity surveyor are in cost modelling and cost planning - neither of which pays any significant attention to the market effect. There are currently two schools of thought about the market effect issue. The first school is prepared to ignore possible effects until more is known. This may be called the pragmatic school. The second school exists solely to criticise the first school. We will call this the antagonistic school. Neither the pragmatic nor the antagonistic school seems particularly keen to resolve the issue one way or the other. The founder and leader of the antagonistic school is Brian Fine, whose 1974 paper is still the basic text on the subject and in which he coined the term 'socially acceptable' price to describe what we now recognise as the market effect. Mr Fine's argument was then, and remains, that the uncertainty surrounding the contractors' costing and cost estimating process is such that it logically leads to a market-orientated pricing approach. Very little factual evidence, however, seems to be available to support these arguments in any conclusive manner. A further, and more important, point for the pragmatic school is that, even if the market effect is as important as Mr Fine believes, there are no indications of how it can be measured, evaluated or predicted. Since 1974 evidence has been accumulating which tends to reinforce the antagonists' view. A review of the literature covering both contractors' and designers' estimates found many references to the use of value judgements in construction pricing (Ashworth & Skitmore, 1985), which supports the antagonistic view in implying the existence of uncertainty overload. The most convincing evidence emerged quite by accident in some research we recently completed with practising quantity surveyors on estimating accuracy (Skitmore, 1985). In addition to demonstrating that individual quantity surveyors and certain types of buildings had a significant effect on estimating accuracy, one surprise result was that only a very small amount of information was used by the most expert surveyors to produce relatively accurate estimates. Only the type and size of the building, it seemed, were really relevant in determining accuracy. More detailed information about the buildings' specification, and even sight of the drawings, did not significantly improve their accuracy level. This seemed to offer clear evidence that the constructional aspects of the project were largely irrelevant and that the expert surveyors were somehow tuning in to the market price of the building. The obvious next step is to feed our expert surveyors with more relevant 'market' information in order to assess its effect.
The problem with this is that our experts do not seem able to verbalise their requirements in this respect - a common occurrence in research of this nature. The lack of research into the nature of market effects on prices also means the literature provides little of benefit. Hence the need for this study. It was felt that a clearer picture of the nature of construction markets would be obtained in an environment where free enterprise was a truly ideological force. For this reason, the United States of America was chosen for the next stage of our investigations. Several people were interviewed in an informal and unstructured manner to elicit their views on the action of market forces on construction prices. Although a small number of people were involved, they were thought to be reasonably representative of knowledge in construction pricing. They were also well able to articulate their views. Our initial reaction to the interviews was that our USA subjects held views very close to those held in the UK. However, detailed analysis revealed the existence of remarkably clear and consistent insights that would not have been obtained in the UK. Further evidence was also obtained from literature relating to the subject, and some of the interviewees very kindly expanded on their views in later postal correspondence. We have now analysed all the evidence received and, although a great deal is of an anecdotal nature, we feel that our findings enable at least the basic nature of the subject to be understood and that the factors and their interrelationships can now be examined more formally in relation to construction price levels. I must express my gratitude to the Royal Institution of Chartered Surveyors' Educational Trust and the University of Salford's Department of Civil Engineering for collectively funding this study. My sincere thanks also go to our American participants who freely gave their time and valuable knowledge to us in our enquiries. Finally, I must record my thanks to Tim and Anne for their remarkable ability to produce an intelligible typescript from my unintelligible writing.
Abstract:
The secretion of cytokines by immune cells plays a significant role in determining the course of an inflammatory response. The levels and timing of each cytokine released are critical for mounting an effective but confined response, whereas excessive or dysregulated inflammation contributes to many diseases. Cytokines are both culprits and targets for effective treatments in some diseases. The multiple points and mechanisms that have evolved for cellular control of cytokine secretion highlight the potency of these mediators and the fine tuning required to manage inflammation. Cytokine production in cells is regulated by cell signaling and at the levels of mRNA and protein synthesis. Thereafter, the intracellular transport pathways and molecular trafficking machinery have intricate and essential roles in dictating the release and activity of cytokines. The trafficking machinery and secretory (exocytic) pathways are complex and highly regulated in many cells, involving specialized membranes, molecules and organelles that enable these cells to deliver cytokines to often-distinct areas of the cell surface in a timely manner. This review provides an overview of secretory pathways - both conventional and unconventional - and key families of trafficking machinery. The prevailing knowledge about the trafficking and secretion of a number of individual cytokines is also summarized. In conclusion, we present emerging concepts about the functional plasticity of secretory pathways and their modulation for controlling cytokines and inflammation.
Abstract:
Mixtures of single odours were used to explore the receptor response profile across individual antennae of Helicoverpa armigera (Hübner) (Lepidoptera: Noctuidae). Seven odours were tested, including floral and green-leaf volatiles: phenyl acetaldehyde, benzaldehyde, β-caryophyllene, limonene, α-pinene, 1-hexanol and 3Z-hexenyl acetate. Electroantennograms of responses to paired mixtures of odours showed that there was considerable variation in receptor tuning across the receptor field between individuals. Data from some moth antennae showed no additivity, which indicated a restricted receptor profile. Results from other moth antennae to the same odour mixtures showed a range of partial additivity, indicating that a wider array of receptor types was present in these moths, with a greater percentage of the receptors tuned exclusively to each odour. Peripheral receptor fields thus show variation in the spectrum of response within a population of moths when exposed to high doses of plant volatiles. This may be related to variation in host choice within moth populations reported by other authors.
Abstract:
In a classification problem we typically face two challenging issues: the diverse characteristics of negative documents, and the fact that many negative documents are close to positive documents. It is therefore hard for a single classifier to clearly classify incoming documents into classes. This paper proposes a novel gradual problem-solving approach to create a two-stage classifier. The first stage identifies reliable negatives (negative documents with weak positive characteristics); it concentrates on minimizing the number of false negative documents (recall-oriented). We use Rocchio, an existing recall-based classifier, for this stage. The second stage is a precision-oriented “fine tuning” stage that concentrates on minimizing the number of false positive documents by applying pattern (statistical phrase) mining techniques. In this stage a pattern-based scoring is followed by threshold setting (thresholding). Experiments show that our statistical-phrase-based two-stage classifier is promising.
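A minimal Python sketch of the two-stage idea follows, using a Rocchio-style centroid score for the recall-oriented first stage and a simple weighted-phrase overlap with a threshold for the precision-oriented second stage. The scoring functions, parameter values and thresholds here are illustrative placeholders, not the authors' pattern-mining method.

```python
import numpy as np

def rocchio_centroid(pos_vecs, neg_vecs, alpha=16.0, beta=4.0):
    """Rocchio prototype: weighted positive centroid minus negative centroid."""
    return alpha * pos_vecs.mean(axis=0) - beta * neg_vecs.mean(axis=0)

def two_stage_classify(doc_vec, doc_phrases, centroid, patterns, theta1=0.0, theta2=0.5):
    """Stage 1 (recall-oriented): keep anything that looks at all positive.
    Stage 2 (precision-oriented): rescore survivors with mined phrase patterns."""
    # Stage 1: Rocchio similarity against the positive prototype.
    if np.dot(doc_vec, centroid) <= theta1:
        return False                        # reliable negative, filtered out early
    # Stage 2: score by weighted overlap with mined phrases (illustrative scoring).
    score = sum(w for phrase, w in patterns.items() if phrase in doc_phrases)
    return score > theta2

# Hypothetical usage:
#   centroid = rocchio_centroid(train_pos_matrix, train_neg_matrix)
#   is_positive = two_stage_classify(vec, {"market share", "revenue"}, centroid,
#                                    patterns={"market share": 0.4, "revenue": 0.3})
```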
Abstract:
The performance of visual speech recognition (VSR) systems is significantly influenced by the accuracy of the visual front-end. Current state-of-the-art VSR systems use off-the-shelf face detectors such as Viola-Jones (VJ), which have limited reliability under changes in illumination and head pose. For a VSR system to perform well under these conditions, an accurate visual front-end is required. This is an important problem to be solved in many practical implementations of audio-visual speech recognition systems, for example in automotive environments for an efficient human-vehicle computer interface. In this paper, we re-examine the current state-of-the-art in VSR by comparing off-the-shelf face detectors with the recently developed Fourier Lucas-Kanade (FLK) image alignment technique. A variety of image alignment and visual speech recognition experiments are performed on a clean dataset as well as on a challenging automotive audio-visual speech dataset. Our results indicate that the FLK image alignment technique can significantly outperform off-the-shelf face detectors, but requires frequent fine-tuning.
Abstract:
Reliability of the performance of biometric identity verification systems remains a significant challenge. Individual biometric samples of the same person (identity class) are not identical at each presentation, and performance degradation arises from intra-class variability and inter-class similarity. These limitations lead to false accepts and false rejects that are interdependent: it is therefore difficult to reduce the rate of one type of error without increasing the other. The focus of this dissertation is to investigate a method based on classifier fusion techniques to better control the trade-off between the verification errors, using text-dependent speaker verification as the test platform. A sequential classifier fusion architecture that integrates multi-instance and multi-sample fusion schemes is proposed. This fusion method enables a controlled trade-off between false alarms and false rejects. For statistically independent classifier decisions, analytical expressions for each type of verification error are derived from the base classifier performances. As this assumption may not always be valid, these expressions are modified to incorporate the correlation between statistically dependent decisions from clients and impostors. The architecture is empirically evaluated on text-dependent speaker verification, using Hidden Markov Model-based, digit-dependent speaker models at each stage with multiple attempts for each digit utterance. The trade-off between the verification errors is controlled by two parameters, the number of decision stages (instances) and the number of attempts at each decision stage (samples), fine-tuned on an evaluation/tuning set. The statistical validity of the derived error estimates is evaluated on test data. The performance of the sequential method is further shown to depend on the order of the combination of digits (instances) and the nature of the repeated attempts (samples). The false rejection and false acceptance rates for the proposed fusion are estimated using the base classifier performances, the variance in correlation between classifier decisions, and the sequence of classifiers with favourable dependence, selected using the 'Sequential Error Ratio' criterion. The error rates are better estimated by incorporating user-dependent information (such as speaker-dependent thresholds and speaker-specific digit combinations) and class-dependent information (such as client-impostor dependent favourable combinations and class-error-based threshold estimation). The proposed architecture is desirable in most speaker verification applications, such as remote authentication and telephone and internet shopping. The tuning of the parameters - the number of instances and samples - serves both the security and user-convenience requirements of speaker-specific verification. The architecture investigated here is also applicable to verification using other biometric modalities such as handwriting, fingerprints and keystrokes.
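Under the independence assumption, the error rates of such a sequential scheme can be composed directly from the base classifier rates. The sketch below assumes a simple reading of the architecture in which a claimant must be accepted at every one of n decision stages (instances), and each stage accepts if any of its k attempts (samples) is accepted; this is an illustrative model, not the dissertation's exact derivation, which also handles correlated decisions.

```python
def sequential_fusion_errors(far, frr, n_stages, k_attempts):
    """Overall FAR/FRR for a sequential multi-instance, multi-sample scheme,
    assuming statistically independent base-classifier decisions.

    Within a stage, k attempts are fused with an OR rule (accept if any
    attempt accepts); across stages an AND rule applies (every stage must
    accept).
    """
    stage_far = 1.0 - (1.0 - far) ** k_attempts   # impostor accepted on some attempt
    stage_frr = frr ** k_attempts                 # client rejected on every attempt
    overall_far = stage_far ** n_stages           # impostor must pass every stage
    overall_frr = 1.0 - (1.0 - stage_frr) ** n_stages
    return overall_far, overall_frr

# Adding stages lowers the false accept rate; adding attempts lowers false rejects.
print(sequential_fusion_errors(far=0.05, frr=0.05, n_stages=3, k_attempts=2))
```

This illustrates how the two parameters trade security (FAR) against user convenience (FRR), which is the control knob the architecture exposes.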
Abstract:
This study explored how Korean men married to migrant women construct meaning around married life. Data were collected through in-depth interviews with 10 men who had been married to migrant women for ≥ 2 years. Data collection and analysis were performed concurrently using a grounded theory approach. The core category generated was the process of sustaining a family unit. The men came to understand the importance of a distribution of power within the family in sustaining the family unit. Constituting this process were four stages: recognizing an imbalance of power, relinquishing power, empowering, and fine-tuning the balance of power. This study provides important insight into the dynamics of marital power from men's point of view by demonstrating a link between the way people adjust to married life and the process by which married couples adjust through the distribution and redistribution of power.
Abstract:
An efficient approach for enhancing the heterointerface quality of electrodeposited ZnO–Cu2O solar cells using ZnO seed layers is devised. We introduce a sputtered ZnO seed layer followed by the sequential electrodeposition of ZnO and Cu2O films. The seed layer is employed to control the growth and crystallinity and to augment the surface area of the electrodeposited ZnO films, thereby tuning the quality of the ZnO–Cu2O heterointerface. The seed layer also assists in forming high-quality ZnO films, with no pin-holes, in a high-pH electrolyte solution. X-ray diffraction patterns, scanning electron and atomic force microscopy images, as well as photovoltaic measurements, clearly demonstrate that the incorporation of certain seed layers alters the heterointerface quality, the heterojunction area and the crystallinity of the films near the junction, which in turn influence the current density of the photovoltaic devices.
Abstract:
Faulted stacking layers are ubiquitously observed during the crystal growth of semiconducting nanowires (NWs). In this paper, we employ reverse non-equilibrium molecular dynamics simulations to elucidate the effect of various faulted stacking layers on the thermal conductivity (TC) of silicon (Si) NWs. We find that stacking faults can greatly reduce the TC of the Si NW. Among the different stacking faults that are parallel to the NW's axis, the 9R polytype structure and the intrinsic and extrinsic stacking faults (iSFs and eSFs) exert more pronounced effects on the reduction of TC than the twin boundary (TB). However, for the perpendicularly aligned faulted stacking layers, the eSFs and 9R polytype structures are observed to induce a larger reduction in the TC of the NW than the TB and iSFs. For all considered NWs, the TC does not show a strong dependence on the number of faulted stacking layers. Our studies suggest the possibility of tuning the thermal properties of Si NWs by altering the crystal structure via different faulted stacking layers.
Abstract:
In this paper we present a method for autonomously tuning the threshold between learning and recognizing a place in the world, based both on how the rodent brain is thought to process and calibrate multisensory data and on the pivoting movement behaviour that rodents perform in doing so. The approach makes no assumptions about the number and type of sensors, the robot platform, or the environment, relying only on the ability of a robot to perform two revolutions on the spot. In addition, it self-assesses the quality of the tuning process in order to identify situations in which tuning may have failed. We demonstrate the autonomous movement-driven threshold tuning on a Pioneer 3DX robot in eight locations spread over an office environment and a building car park, and then evaluate the mapping capability of the system on journeys through these environments. The system is able to pick a place recognition threshold that enables successful environment mapping in six of the eight locations, while also autonomously flagging the tuning failure in the remaining two locations. We discuss how the method, in combination with parallel work on autonomous weighting of individual sensors, moves the parameter-dependent RatSLAM system significantly closer to sensor-, platform- and environment-agnostic operation.
Abstract:
Whole-image descriptors such as GIST have been used successfully for persistent place recognition when combined with temporal filtering or sequential filtering techniques. However, whole-image descriptor localization systems often apply a heuristic rather than a probabilistic approach to place recognition, requiring substantial environment-specific tuning prior to deployment. In this paper we present a novel online solution that uses statistical approaches to calculate place recognition likelihoods for whole-image descriptors, without requiring either environmental tuning or pre-training. Using a real-world benchmark dataset, we show that this method creates distributions appropriate to a specific environment in an online manner. Our method performs comparably to FAB-MAP in raw place recognition performance, and integrates into a state-of-the-art probabilistic mapping system to provide superior performance to whole-image methods that are not based on true probability distributions. The method provides a principled means for combining the powerful change-invariant properties of whole-image descriptors with probabilistic back-end mapping systems without the need for prior training or system tuning.
Abstract:
Nanomaterials are prone to influence by chemical adsorption because of their large surface-to-volume ratios. This enables sensitive detection of adsorbed chemical species which, in turn, can tune the properties of the host material. Recent studies have discovered that single- and multi-layer molybdenum disulfide (MoS2) films are ultra-sensitive to several important environmental molecules. Here we report new findings from ab initio calculations that reveal substantially enhanced adsorption of NO and NH3 on strained monolayer MoS2, with significant impact on the properties of the adsorbates and the MoS2 layer. The magnetic moment of adsorbed NO can be tuned between 0 and 1 μB; strain also induces an electronic phase transition between half-metal and metal. Adsorption of NH3 weakens the MoS2 layer considerably, which explains the large discrepancy between the experimentally measured strength and breaking strain of MoS2 films and previous theoretical predictions. On the other hand, adsorption of NO2, CO, and CO2 is insensitive to the strain condition in the MoS2 layer. This contrasting behavior allows sensitive strain engineering of selective chemical adsorption on MoS2 with effective tuning of mechanical, electronic, and magnetic properties. These results suggest new design strategies for constructing MoS2-based ultrahigh-sensitivity nanoscale sensors and electromechanical devices.
Abstract:
Whole-image descriptors have recently been shown to be remarkably robust to perceptual change, especially compared to local features. However, whole-image-based localization systems typically rely on heuristic methods for determining appropriate matching thresholds in a particular environment. These environment-specific tuning requirements, and the lack of a meaningful interpretation of the resulting arbitrary thresholds, limit the general applicability of these systems. In this paper we present a Bayesian model of probability for whole-image descriptors that can be seamlessly integrated into localization systems designed for probabilistic visual input. We demonstrate this method using CAT-Graph, an appearance-based visual localization system originally designed for a FAB-MAP-style probabilistic input. We show that using whole-image descriptors as visual input extends CAT-Graph’s functionality to environments that experience a greater amount of perceptual change. We also present a method of estimating whole-image probability models in an online manner, removing the need for a prior training phase. We show that this online, automated training method can perform comparably to pre-trained, manually tuned local descriptor methods.
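The core step in such a probabilistic treatment is converting a raw descriptor distance into a match probability rather than thresholding it directly. The sketch below does this with Bayes' rule over two fitted Gaussian distance distributions (matching vs non-matching); the choice of Gaussians, the online refitting, and the prior are illustrative assumptions, not CAT-Graph's or the paper's exact model.

```python
import numpy as np
from scipy.stats import norm

class OnlineMatchModel:
    """Converts whole-image descriptor distances into match probabilities."""

    def __init__(self, prior_match=0.1):
        self.prior_match = prior_match
        self.match_dists = []      # distances observed for true matches
        self.nonmatch_dists = []   # distances observed for non-matches

    def update(self, distance, is_match):
        (self.match_dists if is_match else self.nonmatch_dists).append(distance)

    def match_probability(self, distance):
        # Fit simple Gaussians to each population (refit on every call for brevity).
        m_mu, m_sd = np.mean(self.match_dists), np.std(self.match_dists) + 1e-6
        n_mu, n_sd = np.mean(self.nonmatch_dists), np.std(self.nonmatch_dists) + 1e-6
        p_d_match = norm.pdf(distance, m_mu, m_sd)
        p_d_nonmatch = norm.pdf(distance, n_mu, n_sd)
        # Bayes' rule: P(match | distance).
        num = p_d_match * self.prior_match
        den = num + p_d_nonmatch * (1.0 - self.prior_match)
        return num / den

# Example usage with a handful of hypothetical labelled distances.
model = OnlineMatchModel()
for d, m in [(0.2, True), (0.25, True), (0.8, False), (0.9, False)]:
    model.update(d, m)
print(model.match_probability(0.3))
```

The output probability can then be fed to a probabilistic back-end mapping system in place of a hard-thresholded match decision.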
Abstract:
A known limitation of the Probability Ranking Principle (PRP) is that it does not cater for dependence between documents. Recently, the Quantum Probability Ranking Principle (QPRP) has been proposed, which implicitly captures dependencies between documents through “quantum interference”. This paper explores whether this new ranking principle leads to improved performance for subtopic retrieval, where novelty and diversity are required. In a thorough empirical investigation, models based on the PRP, as well as other recently proposed ranking strategies for subtopic retrieval (i.e. Maximal Marginal Relevance (MMR) and Portfolio Theory (PT)), are compared against the QPRP. On the given task, it is shown that the QPRP outperforms these other ranking strategies. Unlike MMR and PT, the QPRP requires no parameter estimation or tuning, making it both simple and effective. This research demonstrates that the application of quantum theory to problems within information retrieval can lead to significant improvements.
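Under the QPRP, documents are selected greedily: the next document maximizes its own relevance probability plus interference terms with the documents already ranked. The sketch below approximates each interference term with a similarity-weighted cross term, using cosine similarity between document vectors as a stand-in for the phase factor; this approximation and all parameter choices are illustrative assumptions, not necessarily the estimator used in the paper.

```python
import numpy as np

def qprp_rank(rel_probs, doc_vecs, k):
    """Greedy QPRP-style ranking.

    rel_probs : estimated relevance probability of each document
    doc_vecs  : document vectors used to approximate interference
    k         : number of documents to rank
    """
    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    ranked, remaining = [], list(range(len(rel_probs)))
    for _ in range(k):
        def score(d):
            # Interference with already-ranked documents: similar documents
            # interfere destructively, discouraging redundant results.
            interference = sum(
                -2.0 * np.sqrt(rel_probs[d] * rel_probs[r]) * cosine(doc_vecs[d], doc_vecs[r])
                for r in ranked
            )
            return rel_probs[d] + interference
        best = max(remaining, key=score)
        ranked.append(best)
        remaining.remove(best)
    return ranked

# Example with three documents, two of which are near-duplicates:
probs = [0.9, 0.85, 0.6]
vecs = np.array([[1.0, 0.0], [0.95, 0.05], [0.0, 1.0]])
print(qprp_rank(probs, vecs, k=3))   # the dissimilar document is promoted above the duplicate
```

Note that, unlike MMR, no trade-off parameter is needed: the balance between relevance and diversity emerges from the probabilities and the interference term itself.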