36 results for Large-scale Testing


Relevance: 100.00%

Abstract:

The seminal multiple-view stereo benchmark evaluations from Middlebury and by Strecha et al. have played a major role in propelling the development of multi-view stereopsis (MVS) methodology. The somewhat small size and variability of these data sets, however, limit their scope and the conclusions that can be drawn from them. To facilitate further development within MVS, we here present a new and varied data set consisting of 80 scenes, each seen from 49 or 64 accurate camera positions. This is accompanied by accurate structured-light scans for reference and evaluation. In addition, all images are taken under seven different lighting conditions. As a benchmark, and to validate the use of our data set for obtaining reasonable and statistically significant findings about MVS, we have applied the three state-of-the-art MVS algorithms by Campbell et al., Furukawa et al., and Tola et al. to the data set. To do this, we have extended the evaluation protocol from the Middlebury evaluation, as necessitated by the more complex geometry of some of our scenes. The data set and accompanying evaluation framework are made freely available online. Based on this evaluation, we are able to observe several characteristics of state-of-the-art MVS, e.g. that there is a tradeoff between the quality of the reconstructed 3D points (accuracy) and how much of an object’s surface is captured (completeness). Also, several issues that we hypothesized would challenge MVS, such as specularities and changing lighting conditions, did not pose serious problems. Our study finds that the two most pressing issues for MVS are lack of texture and meshing (forming 3D points into closed triangulated surfaces).
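The accuracy/completeness trade-off mentioned in the abstract is usually quantified with nearest-neighbour distances between the reconstructed point cloud and the reference scan. The sketch below (Python, using NumPy and SciPy) illustrates only that common point-to-point formulation; it is not the authors' extended evaluation protocol, and the point clouds and coverage threshold are hypothetical.

```python
# Minimal sketch of the accuracy/completeness idea, not the authors' protocol.
# Accuracy: how close reconstructed points lie to the reference scan.
# Completeness: how much of the reference scan is covered by the reconstruction.
import numpy as np
from scipy.spatial import cKDTree

def accuracy_completeness(reconstructed, reference, threshold=0.01):
    """reconstructed, reference: (N, 3) arrays of 3D points.
    threshold: hypothetical distance below which a reference point counts as covered."""
    ref_tree = cKDTree(reference)
    rec_tree = cKDTree(reconstructed)

    # Accuracy: distance from each reconstructed point to the reference scan.
    acc_dist, _ = ref_tree.query(reconstructed)
    # Completeness: distance from each reference point to the reconstruction.
    comp_dist, _ = rec_tree.query(reference)

    return {
        "accuracy_mean": acc_dist.mean(),
        "accuracy_median": np.median(acc_dist),
        "completeness_mean": comp_dist.mean(),
        "completeness_fraction": (comp_dist < threshold).mean(),
    }

if __name__ == "__main__":
    # Random stand-in point clouds for illustration only.
    rng = np.random.default_rng(0)
    rec = rng.random((5000, 3))
    ref = rng.random((8000, 3))
    print(accuracy_completeness(rec, ref))
```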

Relevance: 100.00%

Abstract:

For the treatment and monitoring of Parkinson's disease (PD) to be scientific, a key requirement is that measurement of disease stages and severity is quantitative, reliable, and repeatable. The last 50 years of PD research have been dominated by qualitative, subjective ratings obtained by human interpretation of the presentation of disease signs and symptoms at clinical visits. More recently, “wearable,” sensor-based, quantitative, objective, and easy-to-use systems for quantifying PD signs in large numbers of participants over extended durations have been developed. This technology has the potential to significantly improve both clinical diagnosis and management of PD and the conduct of clinical studies. However, the large-scale, high-dimensional character of the data captured by these wearable sensors requires sophisticated signal-processing and machine-learning algorithms to transform it into scientifically and clinically meaningful information. Such algorithms that “learn” from data have shown remarkable success in making accurate predictions for complex problems in which human skill has been required to date, but they are challenging to evaluate and apply without a basic understanding of the underlying logic on which they are based. This article contains a nontechnical tutorial review of relevant machine-learning algorithms, also describing their limitations and how these can be overcome. It discusses the implications of this technology and offers a practical road map for realizing its full potential in PD research and practice. © 2016 International Parkinson and Movement Disorder Society.
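As a rough illustration of the kind of pipeline such a tutorial covers, the sketch below extracts simple features from windows of a wearable accelerometer signal and cross-validates a classifier on them. It is not taken from the article; the features, window length, synthetic data, and choice of scikit-learn classifier are all assumptions made for the example.

```python
# Hypothetical sketch: windowed features from a wearable accelerometer signal
# fed to a cross-validated classifier. Feature set, window length and labels
# are invented for the sketch, not taken from the article.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(signal, fs=100, window_s=5):
    """Split a 1-D accelerometer magnitude signal into fixed windows and
    compute simple features per window (mean, std, dominant frequency)."""
    n = fs * window_s
    windows = signal[: len(signal) // n * n].reshape(-1, n)
    spectra = np.abs(np.fft.rfft(windows, axis=1))
    dominant_freq = np.fft.rfftfreq(n, d=1.0 / fs)[spectra.argmax(axis=1)]
    return np.column_stack([windows.mean(axis=1),
                            windows.std(axis=1),
                            dominant_freq])

# Synthetic stand-in recordings for two classes.
rng = np.random.default_rng(1)
X = np.vstack([window_features(rng.normal(0, s, 60_000)) for s in (1.0, 1.5)])
y = np.repeat([0, 1], X.shape[0] // 2)

# Cross-validation gives an honest out-of-sample estimate, which is the
# evaluation issue the review emphasises.
scores = cross_val_score(RandomForestClassifier(n_estimators=200), X, y, cv=5)
print("CV accuracy: %.2f ± %.2f" % (scores.mean(), scores.std()))
```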

Relevance: 100.00%

Abstract:

This study presents a computational parametric analysis of DME steam reforming in a large-scale Circulating Fluidized Bed (CFB) reactor. The Computational Fluid Dynamics (CFD) model used, which is based on an Eulerian-Eulerian dispersed flow approach, was developed and validated in Part I of this study [1]. The effects of the reactor inlet configuration, gas residence time, inlet temperature and steam-to-DME ratio on the overall reactor performance and products have all been investigated. The results show that the use of a double-sided solid feeding system yields a remarkable improvement in flow uniformity, but has only a limited effect on the reactions and products. The temperature has been found to play a dominant role in increasing the DME conversion and the hydrogen yield. According to the parametric analysis, it is recommended to run the CFB reactor at an inlet temperature of around 300 °C, a steam-to-DME molar ratio of 5.5, a gas residence time of 4 s and a space velocity of 37,104 ml gcat⁻¹ h⁻¹. At these conditions, the DME conversion and the hydrogen molar concentration in the product gas were both found to be around 80%.
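For readers unfamiliar with the operating-condition terms quoted above, the small sketch below spells out how gas space velocity (ml gcat⁻¹ h⁻¹) and gas residence time are defined. The formulas are the standard definitions; the numerical inputs are hypothetical placeholders, not values from the paper.

```python
# Standard definitions of the quantities used to specify the operating point:
# space velocity = volumetric feed rate / catalyst mass,
# residence time = reactor volume / volumetric flow (plug-flow approximation).
# Example numbers are hypothetical, not taken from the paper.

def space_velocity_ml_per_gcat_h(volumetric_flow_ml_per_h: float,
                                 catalyst_mass_g: float) -> float:
    """Gas space velocity in ml gcat^-1 h^-1."""
    return volumetric_flow_ml_per_h / catalyst_mass_g

def gas_residence_time_s(reactor_volume_m3: float,
                         volumetric_flow_m3_per_s: float) -> float:
    """Mean gas residence time in seconds."""
    return reactor_volume_m3 / volumetric_flow_m3_per_s

if __name__ == "__main__":
    # Hypothetical values for illustration only.
    print(space_velocity_ml_per_gcat_h(volumetric_flow_ml_per_h=3.7e7,
                                       catalyst_mass_g=1000.0))   # ~37,000
    print(gas_residence_time_s(reactor_volume_m3=2.0,
                               volumetric_flow_m3_per_s=0.5))     # 4.0 s
```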

Relevance: 100.00%

Abstract:

Some color centers in diamond can serve as quantum bits that can be manipulated with microwave pulses and read out with a laser, even at room temperature. However, the photon collection efficiency from bulk diamond is greatly reduced by refraction at the diamond/air interface. To address this issue, we fabricated arrays of diamond nanostructures, differing in both diameter and top-end shape, with HSQ and Cr as the etching mask materials, aiming toward large-scale fabrication of single-photon sources with enhanced collection efficiency made of nitrogen-vacancy (NV) embedded diamond. With a mixture of O2 and CHF3 gas plasma, diamond pillars with diameters down to 45 nm were obtained. The evolution of the top-end shape has been described with a simple model. Tests of the size-dependent single-photon properties confirmed a single-photon collection efficiency enhancement of more than tenfold, and, as expected, a mild decrease of the decoherence time with decreasing pillar diameter was observed. These results provide useful information for future applications of nanostructured diamond as a single-photon source.
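The size-dependent single-photon tests mentioned above typically involve two simple quantities: the collection-efficiency enhancement relative to bulk diamond, and the second-order correlation value g2(0), which must stay below 0.5 for a single emitter. The sketch below illustrates both calculations with invented placeholder numbers; it is not the authors' measurement or analysis code.

```python
# Hypothetical sketch of two common single-photon checks:
# (1) collection-efficiency enhancement = pillar count rate / bulk count rate,
# (2) g2(0) estimated from a normalised coincidence histogram.
# All numbers are invented placeholders, not data from the paper.
import numpy as np

def enhancement(pillar_counts_kcps, bulk_counts_kcps):
    """Collection-efficiency enhancement relative to bulk diamond."""
    return np.asarray(pillar_counts_kcps) / bulk_counts_kcps

def g2_zero(coincidence_histogram, zero_delay_bin):
    """Estimate g2(0) as zero-delay coincidences divided by the
    long-delay (uncorrelated) baseline."""
    h = np.asarray(coincidence_histogram, dtype=float)
    baseline = np.median(np.delete(h, zero_delay_bin))
    return h[zero_delay_bin] / baseline

diameters_nm = [45, 90, 150, 220]                        # hypothetical
print(enhancement([210, 280, 330, 360], bulk_counts_kcps=30.0))

histogram = [98, 101, 97, 12, 99, 103, 100]              # hypothetical dip at bin 3
print("g2(0) ≈ %.2f -> single emitter" % g2_zero(histogram, zero_delay_bin=3))
```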

Relevance: 100.00%

Abstract:

Purpose - The purpose of this paper is to analyse the concept of supply chain collaboration and to provide an overall framework that can be used as a conceptual landmark for further empirical research. In addition, the concept is explored in the context of the agri-food industry and its particularities are identified. Finally, the paper presents empirical evidence from an exploratory case study in the agri-food industry, at the grower-processor interface, together with information on how the concept is actually applied in small and medium-sized enterprises (SMEs). Design/methodology/approach - The paper employed case study research, conducting in-depth interviews in the two companies. Findings - The supply chain collaboration concept is of significant importance for the agri-food industry; however, some constraints arise due to the nature of the industry's products and the specific structure of the sector. Consequently, collaboration in the supply chain is often limited to operational issues and logistics-related activities. Research limitations/implications - The research is limited to a single case study, and further qualitative testing of the conceptual model is needed in order to adjust the model before large-scale testing. Practical implications - The case study findings may be transferable to other similar dual relationships at the grower-processor interface. Weaker parties in asymmetric relationships have opportunities to improve their position, altering the dependence balance, by achieving product/process excellence. Originality/value - The paper provides evidence regarding the applicability of the supply chain collaboration concept in the agri-food industry. It considers not relationships between big multinational companies, but SMEs. © Emerald Group Publishing Limited.

Relevance: 100.00%

Abstract:

Advances in the area of industrial metrology have generated new technologies that are capable of measuring components with complex geometry and large dimensions. However, no standards or best-practice guides are available for the majority of such systems. Therefore, these new systems require appropriate testing and verification in order for users to understand their full potential prior to deployment in a real manufacturing environment. This is a crucial stage, especially when more than one system can be used for a specific measurement task. In this paper, two relatively new large-volume measurement systems, the mobile spatial co-ordinate measuring system (MScMS) and the indoor global positioning system (iGPS), are reviewed. The two systems utilize different technologies: the MScMS is based on ultrasound and radiofrequency signal transmission, whereas the iGPS uses laser technology. Both systems have small components that are distributed around the measuring area to form a network of sensors, allowing rapid dimensional measurements to be performed on large objects with typical dimensions of several decametres. Their portability, reconfigurability, and ease of installation make these systems attractive for many industries that manufacture large-scale products. In this paper, the major technical aspects of the two systems are briefly described and compared. Initial results of the tests performed to establish the repeatability and reproducibility of these systems are also presented. © IMechE 2009.
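As a minimal illustration of how repeatability and reproducibility can be estimated from repeated measurements of a reference artefact, the sketch below pools the within-session scatter (repeatability) and the spread of session means (reproducibility). It is only a generic example with hypothetical values, not the authors' test procedure or data.

```python
# Generic repeatability/reproducibility estimate from repeated measurements
# of the same reference length. Rows = measurement sessions (e.g. different
# setups), columns = repeats within a session; all values are hypothetical.
import numpy as np

measurements_mm = np.array([
    [1000.12, 1000.15, 1000.10, 1000.14],   # session 1 (hypothetical)
    [1000.21, 1000.18, 1000.23, 1000.20],   # session 2 (hypothetical)
    [1000.05, 1000.09, 1000.07, 1000.06],   # session 3 (hypothetical)
])

# Repeatability: pooled standard deviation of repeats within each session.
within_var = measurements_mm.var(axis=1, ddof=1)
repeatability = np.sqrt(within_var.mean())

# Reproducibility: spread of the session means around the overall mean.
session_means = measurements_mm.mean(axis=1)
reproducibility = session_means.std(ddof=1)

print(f"repeatability   ≈ {repeatability:.3f} mm")
print(f"reproducibility ≈ {reproducibility:.3f} mm")
```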