939 results for Distance between Experts’ Statements
Abstract:
Studies on quantitative fit analysis of precontoured fracture fixation plates have emerged only within the last few years, leaving a wide research gap in this area. Quantitative fit assessment measures the gap between a fracture fixation plate and the underlying bone and specifies the required plate-fit criteria. For a clinically meaningful fit-assessment outcome, it is necessary to establish appropriate criteria and parameters. The present paper studies this subject and recommends using multiple fit criteria, with the maximum distance between the plate and the underlying bone as the fit parameter, for a clinically relevant outcome. We also propose the development of a software tool for automatic plate positioning and fit assessment, for the purpose of implant design validation and optimization, in an effort to provide better-fitting implants that can assist proper fracture healing. The fundamental specifications of the software are discussed.
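The maximum plate-bone distance proposed above as the fit parameter can be sketched numerically. The function below is a hypothetical illustration (the name, the point-cloud representation and the toy geometry are assumptions, not from the paper), approximating the gap by nearest-neighbour distances between sampled surface points:

```python
import numpy as np

def max_plate_bone_gap(plate_pts, bone_pts):
    """Fit parameter: the largest gap between sampled plate-surface points
    and their nearest bone-surface points (both given as N x 3 arrays, mm)."""
    # pairwise distances, shape (n_plate, n_bone)
    d = np.linalg.norm(plate_pts[:, None, :] - bone_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)        # gap beneath each plate point
    return float(nearest.max())    # worst-case gap governs the fit verdict

# toy example: a flat 5 x 5 plate held 1 mm above a flat bone patch
bone = np.array([[x, y, 0.0] for x in range(5) for y in range(5)])
plate = bone + np.array([0.0, 0.0, 1.0])
print(max_plate_bone_gap(plate, bone))  # 1.0
```

A real implementation would use meshed surfaces and a spatial index rather than brute-force pairwise distances, but the fit parameter itself is this single worst-case number.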
Abstract:
Bandwidths and offsets are important components of vehicle traffic control strategies. This article proposes new methods for quantifying and selecting them. Bandwidth is the amount of green time available for vehicles to travel through adjacent intersections without having to stop at the second traffic light. The offset is the difference between the starting times of the "green" periods at two adjacent intersections along a given route. The core ideas in this article were developed during the 2013 Maths and Industry Study Group in Brisbane, Australia. Analytical expressions for computing bandwidth as a function of offset are developed. An optimisation model for selecting offsets across an arterial is proposed. Arterial roads were focussed upon, as bandwidth and offset have a greater impact on these types of road than on a full traffic network. A generic optimisation-simulation approach is also proposed to refine an initial starting solution according to a specified metric. A metric that reflects the number of stops, and the distance between stops, is proposed to explicitly reduce the dissatisfaction of road users and to implicitly reduce fuel consumption and emissions. Conceptually, the optimisation-simulation approach is superior, as it handles real-life complexities and is a global optimisation approach. The models and equations in this article can be used in road planning and traffic control.
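The bandwidth-as-a-function-of-offset idea can be sketched for a single direction of travel. The toy model below (the function name, the fixed travel time and the two-intersection, equal-cycle simplification are assumptions, not the article's formulation) measures how much of the upstream green window arrives inside the downstream green window:

```python
def bandwidth(green, cycle, offset, travel):
    """Bandwidth (s) through two signalised intersections: the part of the
    upstream green window that arrives during the downstream green window.
    One direction only, fixed travel time, equal cycle lengths."""
    # upstream green [0, green) reaches downstream over [travel, travel + green);
    # downstream green occupies [offset, offset + green), both modulo `cycle`
    start = (travel - offset) % cycle
    overlap = 0.0
    for s in (start, start - cycle):      # unwrap the circular interval
        overlap += max(0.0, min(s + green, green) - max(s, 0.0))
    return overlap

print(bandwidth(green=30, cycle=90, offset=20, travel=20))  # 30.0 (fully coordinated)
print(bandwidth(green=30, cycle=90, offset=0, travel=20))   # 10.0
```

Selecting offsets across an arterial then amounts to maximising a combination of such overlaps over all adjacent intersection pairs.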
Abstract:
With the growing size and variety of social media files on the web, it is becoming critical to organize them efficiently into clusters for further processing. This paper presents a novel scalable constrained document clustering method that harnesses the power of search engines capable of dealing with large text data. Instead of calculating the distance between a document and every cluster centroid, a neighborhood of best cluster candidates is chosen using a document ranking scheme. To make the method faster and less memory-dependent, in-memory and in-database processing are combined in a semi-incremental manner. The method has been extensively tested in the social event detection application. Empirical analysis shows that the proposed method is efficient in both computation and memory usage while producing notable accuracy.
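The candidate-neighborhood idea can be sketched with a toy inverted index standing in for the search engine; the names, the shared-term ranking and the data are illustrative assumptions, not the paper's implementation:

```python
from collections import defaultdict

def build_index(centroids):
    """Inverted index from term -> clusters whose centroid contains it."""
    index = defaultdict(set)
    for cid, terms in centroids.items():
        for t in terms:
            index[t].add(cid)
    return index

def candidate_clusters(doc_terms, index, k=2):
    """Rank clusters by shared-term count and keep only the top-k,
    instead of scoring the document against every centroid."""
    scores = defaultdict(int)
    for t in doc_terms:
        for cid in index.get(t, ()):
            scores[cid] += 1
    return sorted(scores, key=scores.get, reverse=True)[:k]

centroids = {
    'music':  {'concert', 'band', 'festival'},
    'sport':  {'match', 'goal', 'stadium'},
    'travel': {'flight', 'hotel', 'festival'},
}
index = build_index(centroids)
print(candidate_clusters({'concert', 'festival', 'band'}, index))  # ['music', 'travel']
```

Exact distances then need to be computed only against the returned candidates, which is what makes the method scale.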
Abstract:
Existing multi-model approaches for image set classification extract local models by clustering each image set individually only once, with fixed clusters used for matching with other image sets. However, this may result in the two closest clusters representing different characteristics of an object, due to undesirable environmental conditions (such as variations in illumination and pose). To address this problem, we propose to constrain the clustering of each query image set by forcing the clusters to have resemblance to the clusters in the gallery image sets. We first define a Frobenius norm distance between subspaces over Grassmann manifolds based on reconstruction error. We then extract local linear subspaces from a gallery image set via sparse representation. For each local linear subspace, we adaptively construct the corresponding closest subspace from the samples of a probe image set by joint sparse representation. We show that by minimising the sparse representation reconstruction error, we approach the nearest point on a Grassmann manifold. Experiments on the Honda, ETH-80 and Cambridge-Gesture datasets show that the proposed method consistently outperforms several other recent techniques, such as Affine Hull based Image Set Distance (AHISD), Sparse Approximated Nearest Points (SANP) and Manifold Discriminant Analysis (MDA).
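As a rough stand-in for the reconstruction-error distance defined in the paper, the standard projection metric on the Grassmann manifold is also a Frobenius-norm distance between subspaces and can be sketched as follows (the paper's actual distance differs; this is only an illustration of the notion):

```python
import numpy as np

def grassmann_projection_distance(A, B):
    """Projection-metric distance between the column spans of A and B:
    a Frobenius-norm distance between points on a Grassmann manifold."""
    Qa, _ = np.linalg.qr(A)   # orthonormal bases for each subspace
    Qb, _ = np.linalg.qr(B)
    # distance between the orthogonal projectors onto each span
    return np.linalg.norm(Qa @ Qa.T - Qb @ Qb.T, ord='fro') / np.sqrt(2)

e1 = np.array([[1.0], [0.0]])
e2 = np.array([[0.0], [1.0]])
print(grassmann_projection_distance(e1, 2 * e1))  # 0.0 (same span)
print(grassmann_projection_distance(e1, e2))      # 1.0 (orthogonal spans)
```

Note that the distance depends only on the spans, not on the particular basis vectors, which is exactly the invariance a subspace-based image set representation needs.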
Abstract:
We consider the problem of combining opinions from different experts in an explicitly model-based way to construct a valid subjective prior in a Bayesian statistical approach. We propose a generic approach based on a hierarchical model that accounts for various sources of variation as well as for potential dependence between experts. We apply this approach to two problems. The first is a food risk assessment problem involving dose-response modelling for Listeria monocytogenes contamination of mice; two hierarchical levels of variation are considered (between and within experts), with a complex mathematical situation due to the use of an indirect probit regression. The second concerns the time taken by PhD students to submit their theses in a particular school; it illustrates a complex situation in which three hierarchical levels of variation are modelled, but with a simpler underlying probability distribution (log-Normal).
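As a much simpler stand-in for the hierarchical Bayesian model (which additionally handles dependence between experts), classical random-effects pooling illustrates how between- and within-expert variation combine into a single prior; the expert statements below are hypothetical numbers, not from the paper:

```python
import numpy as np

def pool_experts(means, sds):
    """Classical random-effects (DerSimonian-Laird) pooling of experts'
    normal statements: between-expert variance tau^2 inflates uncertainty."""
    m, s = np.asarray(means, float), np.asarray(sds, float)
    w = 1.0 / s**2                                    # within-expert precisions
    mbar = np.sum(w * m) / np.sum(w)
    q = np.sum(w * (m - mbar)**2)                     # heterogeneity statistic
    k = len(m)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    ws = 1.0 / (s**2 + tau2)                          # tau^2-adjusted weights
    mu = np.sum(ws * m) / np.sum(ws)
    return mu, np.sqrt(1.0 / np.sum(ws))

# three hypothetical experts stating a dose-response parameter
mu, sd = pool_experts([2.0, 2.4, 3.1], [0.3, 0.4, 0.5])
print(round(mu, 2))  # 2.4
```

The pooled standard deviation exceeds what naive precision-weighting would give whenever the experts disagree more than their stated uncertainties allow, which is the same effect the hierarchical model captures.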
Abstract:
PURPOSE To compare diffusion-weighted functional magnetic resonance imaging (DfMRI), a novel alternative to blood oxygenation level-dependent (BOLD) contrast, with BOLD in a functional MRI experiment. MATERIALS AND METHODS Nine participants viewed contrast-reversing (7.5 Hz) black-and-white checkerboard stimuli using block and event-related paradigms. DfMRI (b = 1800 s/mm²) and BOLD sequences were acquired. Four parameters describing the observed signal were assessed: percent signal change, spatial extent of the activation, the Euclidean distance between peak voxel locations, and the time-to-peak (TTP) of the best-fitting impulse response for the different paradigms and sequences. RESULTS The BOLD conditions showed a higher percent signal change relative to DfMRI; however, event-related DfMRI showed the strongest group activation (t = 21.23, P < 0.0005). Activation was more diffuse and spatially closer to the BOLD response for DfMRI when the block design was used. Event-related DfMRI showed the shortest TTP (4.4 ± 0.88 s). CONCLUSION The hemodynamic contribution to DfMRI may increase with the use of block designs.
Abstract:
Inductive fault current limiters (FCLs) offer several advantages, such as significant current limitation, immediate triggering and relatively low losses. Despite these advantages, saturated-core FCLs have not been commercialized, due to their large size and associated high costs. A major remaining challenge is to reduce the footprint of the device. In this paper, a solution to reduce the overall footprint is proposed and discussed. In arrangements of windings on a core in reactors such as FCLs, the core is conventionally grounded. The electrical insulation distance between the high-voltage winding and the core can be reduced if the core is left at floating potential. This paper presents the results of an investigation into the insulation of such a coil-core assembly. Two experiments were conducted. In the first, the behavior of the apparatus under high-voltage conditions was assessed by performing power-frequency and lightning-impulse tests. In the second, a low-voltage test was conducted during which voltages of different frequencies, and pulses with varying rise times, were applied. A finite element simulation was also carried out for comparison and further investigation.
Abstract:
The proper function of the spindle is crucial to the high fidelity of chromosome segregation and is indispensable for tumor suppression in humans. Centrobin is a recently identified centrosomal protein that has a role in stabilizing the microtubule structure. Here we functionally characterize the defects in centrosome integrity and spindle assembly in Centrobin-depleted cells. Centrobin-depleted cells show a range of spindle abnormalities, including unfocused poles that are not associated with centrosomes, S-shaped spindles and mini-spindles. These cells undergo mitotic arrest and subsequently often die by apoptosis, as determined by live-cell imaging. Co-depletion of Mad2 relieves the mitotic arrest, indicating that the cells arrest due to a failure to silence the spindle checkpoint in metaphase. Consistent with this, Centrobin-depleted metaphase cells stained positive for BubR1 and BubR1 S676. Staining with a panel of centrosome markers showed a loss of centrosome anchoring to the mitotic spindle. Furthermore, these cells show fewer cold-stable microtubules and a shorter distance between kinetochore pairs. These results show a requirement for Centrobin in maintaining centrosome integrity, which in turn promotes anchoring of the mitotic spindle to the centrosomes. This anchoring is required for the stability of microtubule-kinetochore attachments and for the biogenesis of a tension-ridden, properly functioning mitotic spindle.
Abstract:
We propose a topological localization method based on optical flow information. We analyse the statistical characteristics of the optical flow signal and demonstrate that the flow vectors can be used to identify and describe key locations in the environment. The key locations (nodes) correspond to significant scene changes and depth discontinuities. Since optical flow vectors contain position, magnitude and angle information, for each node we extract low- and high-order statistical moments of the vectors and use them as descriptors for that node. Once a database of nodes and their corresponding optical flow features is created, the robot can perform topological localization by computing the Mahalanobis distance between the current frame and the database entries. This is supported by field trials, which illustrate the repeatability of the proposed method for detecting and describing key locations in indoor and outdoor environments under challenging and diverse lighting conditions.
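The Mahalanobis-distance matching step can be sketched as follows; the node database layout, feature dimension and numbers are illustrative assumptions, not the paper's actual flow moments:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of feature vector x from a node's
    (mean, covariance) optical-flow statistics."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def localize(frame_features, node_db):
    """Return the database node whose flow statistics are closest
    to the current frame's features."""
    return min(node_db, key=lambda n: mahalanobis(frame_features, n['mean'], n['cov']))

# toy database of two nodes described by flow-moment statistics
db = [
    {'name': 'corridor', 'mean': np.array([1.0, 0.0]), 'cov': np.eye(2)},
    {'name': 'doorway',  'mean': np.array([5.0, 2.0]), 'cov': np.eye(2)},
]
print(localize(np.array([4.5, 1.8]), db)['name'])  # doorway
```

Unlike a plain Euclidean match, the per-node covariance lets features that vary a lot at a given location (e.g. flow magnitude under changing speed) count for less.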
Knowledge Transfer in Transnational Programmes: Opportunities and Challenges for the Pacific Region
Abstract:
The globalised world: the current higher education community. The last decade has seen rapid changes in the landscape of higher education (HE) throughout the world, largely as a product of globalisation. A major effect has been to propel the interconnectedness between nations and people across the globe (Scholte, 2005). The use of information and communication technology (ICT) has diminished the distance between countries. The world's economies are becoming more integrated and interrelated through neoliberal economic policies, free trade agreements and open access to goods and services beyond national borders, policies promulgated by organisations such as the World Trade Organization and the World Bank (Marginson & Ordorika, 2011; Mok, 2011). As a consequence, universities are operating at global, national and local levels simultaneously. In the Pacific region, new universities are emerging. For example, Fiji now has one regional and two national universities; Samoa has a national university and Solomon Islands has an institute of higher education. These new players add to regional competition as they open opportunities for global partnerships and transnational programmes. Thus, participating at these multiple levels is inevitable, and no university is immune to these changes (Marginson, Kaur & Sawir, 2011a). Universities are now part of a global HE community that cannot be confined within a nation's borders. Transnational HE programmes are perhaps one of the most evident demonstrations of the interconnectedness of universities across countries in this global era.
Abstract:
Aim A new method of penumbral analysis is implemented that allows an unambiguous determination of field size, penumbra size and penumbra quality for small fields and other non-standard fields. Both source occlusion and lateral electronic disequilibrium affect the size and shape of cross-axis profile penumbrae; each is examined in detail. Method A new method of penumbral analysis is implemented in which the square of the derivative of the cross-axis profile is plotted. The resultant graph displays two peaks in place of the two penumbrae. This allows a strong visualisation of the quality of a field penumbra, as well as a mathematically consistent method of determining field size (the distance between the two peaks' maxima) and penumbra (the full width at tenth maximum of each peak). Cross-axis profiles were simulated in a water phantom at a depth of 5 cm using Monte Carlo modelling, for field sizes between 5 and 30 mm. The field size and penumbra size of each field were calculated using the method above, as well as with the traditional definitions set out in IEC 976. The effects of source occlusion and lateral electronic disequilibrium on the penumbrae were isolated by repeating the simulations with electron transport removed and with an electron spot size of 0 mm, respectively. Results All field sizes calculated using the traditional and proposed methods agreed within 0.2 mm. The penumbra size measured using the proposed method was systematically 1.8 mm larger than that from the traditional method at all field sizes. The size of the source had a larger effect on the size of the penumbra than did lateral electronic disequilibrium, particularly at very small field sizes. Conclusion Traditional methods of calculating field size and penumbra are proved to be mathematically adequate for small fields. However, the field size definition proposed in this study would be more robust among other non-standard fields, such as flattening-filter-free fields.
Source occlusion plays a bigger role than lateral electronic disequilibrium in small field penumbra size.
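The derivative-squared analysis described above can be sketched numerically on a synthetic profile; the tanh penumbra model, the grid and all numbers below are assumptions for illustration, not the simulated Monte Carlo data:

```python
import numpy as np

def field_size_and_penumbra(x, dose):
    """Field size: distance between the two maxima of the squared profile
    derivative; penumbra: full width at tenth maximum (FWTM) of each peak."""
    g = np.gradient(dose, x) ** 2
    mid = len(x) // 2
    i_left = int(np.argmax(g[:mid]))          # peak at the left penumbra
    i_right = mid + int(np.argmax(g[mid:]))   # peak at the right penumbra

    def fwtm(i_peak):
        t = g[i_peak] / 10.0                  # tenth of the peak maximum
        lo = i_peak
        while lo > 0 and g[lo - 1] >= t:
            lo -= 1
        hi = i_peak
        while hi < len(g) - 1 and g[hi + 1] >= t:
            hi += 1
        return x[hi] - x[lo]

    return x[i_right] - x[i_left], fwtm(i_left), fwtm(i_right)

# synthetic 20 mm profile with smooth (tanh) penumbrae, positions in mm
x = np.linspace(-25.0, 25.0, 2001)
dose = 0.5 * (np.tanh((x + 10.0) / 2.0) - np.tanh((x - 10.0) / 2.0))
fs, p_left, p_right = field_size_and_penumbra(x, dose)
print(round(fs, 2))  # 20.0 for this symmetric profile
```

The field size falls out as the peak-to-peak distance with no reference to a 50% dose level, which is what makes the definition usable for flattening-filter-free and other non-standard fields.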
Abstract:
A common finding in the brand extension literature is that an extension's favorability is a function of the perceived fit between the parent brand and its extension (Aaker and Keller 1990; Park, Milberg, and Lawson 1991; Völckner and Sattler 2006), a relationship that is partially mediated by perceptions of risk (Milberg, Sinn, and Goodstein 2010; Smith and Andrews 1995). In other words, as fit between the parent brand and its extension increases, parent brand beliefs become more readily available, thus increasing consumer certainty and confidence about the new extension, which results in more positive evaluations. Conversely, as perceived fit decreases, consumer certainty about the parent brand's ability to introduce the extension is reduced, leading to more negative evaluations. Building on the notion that the perceived fit of vertical line extensions is a function of the price/quality distance between the parent brand and its extension (Lei, de Ruyter, and Wetzels 2008), traditional brand extension knowledge predicts a directionally consistent impact of perceived fit on evaluations of vertical extensions. Hence, vertical (upscale or downscale) extensions placed closer to the parent brand on the price/quality spectrum should lead to higher favorability ratings than more distant ones.
Abstract:
Traditionally, the notion of drawing in-situ has suggested the physical presence of the artist in the environment under scrutiny. The assumption here of enhanced connectivity, however, is hasty in light of the idea that situation implies a relative spatial value determined by the interplay of subject and location, where the possibility of not being "in-situ" is problematic. The fact that traditional drawing in-situ, such as the rendering of landscape, requires a framing of the world "out there" suggests a distance between the perceived object of representation and the drawing surface. Rather than suggesting that some drawing is situated and other sorts are not, however, I argue that situation, or site, is variously extended and intensified depending on the nature of the mediation between surface and environment. The suggestion here is that site is not so much a precondition as a performative function, developed in the act of drawing and always implicating the drawing surface. In my discussion I focus on specific works by Toba Khedoori and Cameron Robbins. In addition, using my own recent drawing practice as a case study, I argue that the geography of site is delimited neither by the horizon nor by the boundaries of the paper. Rather, I propose that site and drawing surface coincide in variously intensive and extensive ways.
Abstract:
The most important aspect of modelling a geological variable, such as metal grade, is its spatial correlation. Spatial correlation describes the relationship between realisations of a geological variable sampled at different locations. Any method for spatially modelling such a variable should be capable of accurately estimating the true spatial correlation. Conventional kriged models are the most commonly used in mining for estimating grade or other variables at unsampled locations, and these models use the variogram or covariance function to model spatial correlation in the process of estimation. However, this usage assumes that the relationships between observations of the variable of interest at nearby locations are influenced only by the vector distance between the locations; that is, these models assume linear spatial correlation of grade. In reality, the relationship with an observation of grade at a nearby location may be influenced by both the distance between the locations and the values of the observations (i.e. non-linear spatial correlation, such as may exist for variables of interest in geometallurgy). Using a kriged model to estimate the grade of unsampled locations when non-linear spatial correlation is present may therefore lead to inaccurate estimation of the ore reserve. Copula-based methods, which are widely used in financial and actuarial modelling to quantify non-linear dependence structures, may offer a solution. Such methods were introduced into geostatistical modelling by Bárdossy and Li (2008) to quantify the non-linear spatial dependence structure in a groundwater quality measurement network. Their copula-based spatial modelling is applied in this research paper to estimate the grade of 3D blocks. Furthermore, real-world mining data are used to validate the model, and the copula-based grade estimates are compared with the results of conventional ordinary and lognormal kriging to demonstrate the reliability of the method.
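A minimal illustration of why copula-scale dependence matters: under a monotone non-linear transform of grade, the linear (Pearson) correlation that variogram-based kriging relies on collapses, while the rank correlation (i.e. the correlation on the copula scale) is unchanged. This sketch does not implement Bárdossy and Li's estimator; the simulated field and numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_corr(a, b):
    """Spearman rank correlation: Pearson correlation computed on the
    copula (rank) scale, capturing monotone but non-linear dependence."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

# a latent Gaussian field value and a correlated neighbour
z1 = rng.normal(size=5000)
z2 = z1 + 0.5 * rng.normal(size=5000)
grade = np.exp(2.0 * z2)              # skewed, lognormal-like grade

# Pearson correlation collapses under the transform; rank correlation survives
print(round(np.corrcoef(z1, z2)[0, 1], 2), round(np.corrcoef(z1, grade)[0, 1], 2))
print(round(rank_corr(z1, z2), 2), round(rank_corr(z1, grade), 2))
```

Because ranks are invariant to any strictly increasing transform, the copula scale separates the dependence structure from the (here heavily skewed) marginal distribution, which is the property the paper exploits for grade estimation.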
Abstract:
A facile and sensitive surface-enhanced Raman scattering (SERS) substrate was prepared by controlled potentiostatic deposition of a closely packed single layer of gold nanostructures (AuNS) over a flat gold (pAu) platform. The nanometre-scale inter-particle distance resulted in a high population of 'hot spots', which greatly enhanced the scattered Raman signal. A refined methodology was followed to precisely quantify the SERS substrate enhancement factor (SSEF), which was estimated to be (2.2 ± 0.17) × 10⁵. The reproducibility of the SERS signal acquired with the developed substrate was tested by establishing the relative standard deviation (RSD) of 150 repeated measurements from various locations on the substrate surface. A low RSD of 4.37 confirmed the homogeneity of the developed substrate. The sensitivity of pAu/AuNS was demonstrated by the straightforward detection of 100 fM 2,4,6-trinitrotoluene (TNT). As a proof of concept of the potential of the new pAu/AuNS substrate in field analysis, TNT in soil and water matrices was selectively detected after forming a Meisenheimer complex with cysteamine.
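The standard SSEF definition (per-molecule SERS intensity over per-molecule normal-Raman intensity) can be evaluated directly; the intensities and molecule counts below are purely illustrative, not values from the paper:

```python
def ssef(i_sers, n_sers, i_raman, n_raman):
    """SERS substrate enhancement factor: per-molecule SERS intensity
    divided by per-molecule normal-Raman intensity."""
    return (i_sers / n_sers) / (i_raman / n_raman)

# illustrative numbers chosen to land near the paper's order of magnitude
print(f"{ssef(i_sers=5.0e4, n_sers=1.0e6, i_raman=2.0e3, n_raman=8.8e9):.1e}")  # 2.2e+05
```

The practical difficulty, which the quantification methodology addresses, is estimating the molecule counts n_sers and n_raman actually probed in each measurement.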