15 results for Random lead times and bulk demands
in Cambridge University Engineering Department Publications Database
Abstract:
This paper presents the results of a study that specifically examines the relationships between measured user capabilities and product demands in a sample of older and disabled users. An empirical study was conducted with 19 users performing tasks with four consumer products (a clock-radio, a mobile phone, a blender and a vacuum cleaner). The sensory, cognitive and motor capabilities of each user were measured using objective capability tests. The study yielded a rich dataset comprising capability measures, product demands, outcome measures (task times and errors), and subjective ratings of difficulty. Scatter plots were produced showing quantified product demands against user capabilities, together with subjective ratings of difficulty. The results are analysed in terms of the strength of the correlations observed, taking into account the limitations of the study sample. Directions for future research are also outlined. © 2011 Springer-Verlag.
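To make concrete the kind of correlation analysis the abstract describes, here is a minimal sketch that correlates one hypothetical capability measure with task times across users and reports Pearson's r. All data and variable names are illustrative assumptions, not values from the study.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical data for illustration only: one motor-capability score
# and one task time per user (the study collected many such pairs
# for its 19 participants).
grip_strength = [22.0, 30.5, 18.2, 27.9, 25.1, 20.4, 33.0, 24.6]
task_time_s   = [41.0, 29.5, 48.3, 31.2, 35.0, 44.1, 26.8, 36.7]

r = correlation(grip_strength, task_time_s)
print(f"Pearson r = {r:+.2f}")  # negative r: higher capability, faster task
```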
Abstract:
The Schottky barrier heights of various metals on the high-permittivity oxides tantalum pentoxide, barium strontium titanate, lead zirconate titanate, and strontium bismuth tantalate have been calculated as a function of the metal work function. It is found that these oxides have a dimensionless Schottky barrier pinning factor S of 0.28-0.4, not close to 1, because S is controlled by Ti-O-type bonds rather than the Sr-O-type bonds assumed in earlier work. The band offsets on silicon are asymmetric, with a much smaller offset at the conduction band, so that Ta2O5 and barium strontium titanate are relatively poor barriers to electrons on Si. © 1999 American Institute of Physics.
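As a hedged illustration of how the pinning factor S enters the standard metal-induced gap states (MIGS) expression for the barrier height, the sketch below evaluates phi_Bn = S(phi_M - phi_CNL) + (phi_CNL - chi). The formula is the conventional MIGS form; the numerical values are placeholders, not figures from the paper.

```python
def schottky_barrier_n(phi_metal, s, phi_cnl, chi):
    """Electron Schottky barrier height (eV) in the MIGS pinning model.

    phi_metal : metal work function (eV)
    s         : dimensionless pinning factor (S = 1: no pinning;
                S -> 0: strong pinning at the charge neutrality level)
    phi_cnl   : charge neutrality level, measured below vacuum (eV)
    chi       : electron affinity of the oxide (eV)
    """
    return s * (phi_metal - phi_cnl) + (phi_cnl - chi)

# Placeholder values, not taken from the paper: Pt (work function
# ~5.65 eV) on a hypothetical oxide with S = 0.3, a charge neutrality
# level 4.8 eV below vacuum, and chi = 3.3 eV.
print(schottky_barrier_n(phi_metal=5.65, s=0.3, phi_cnl=4.8, chi=3.3))
```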
Abstract:
A case study of an aircraft engine manufacturer is used to analyze the effects of management levers on the lead time and design errors generated in an iteration-intensive concurrent engineering process. The levers considered are the amount of design-space exploration iteration, the degree of process concurrency, and the timing of design reviews. Simulation is used to show how the ideal combination of these levers can vary with changes in design problem complexity, which can increase, for instance, when novel technology is incorporated in a design. The results confirm that it is important to consider multiple iteration-influencing factors and their interdependencies to understand concurrent processes, because the factors can interact with confounding effects. The article also demonstrates a new approach to deriving a system dynamics model from a process task network, which could be applied to analyze other concurrent engineering scenarios. © The Author(s) 2012.
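The paper derives its system dynamics model from a process task network; as a much simpler hedged sketch of why such levers interact, the toy Monte Carlo below lets the rework probability (a stand-in for design problem complexity) erode the lead-time benefit of concurrency. Every name and parameter is hypothetical and not the authors' model.

```python
import random

def simulate_lead_time(n_tasks=10, rework_prob=0.3, concurrency=2,
                       task_time=1.0, seed=0):
    """Toy iteration model: up to `concurrency` tasks run per step;
    each finished task re-enters the queue for rework with
    probability `rework_prob`. Returns total elapsed lead time."""
    rng = random.Random(seed)
    queue, steps = n_tasks, 0
    while queue > 0:
        batch = min(concurrency, queue)
        queue -= batch
        steps += 1
        # Rework: some completed tasks come back as new design errors.
        queue += sum(rng.random() < rework_prob for _ in range(batch))
    return steps * task_time

# Levers interact: doubling concurrency helps less as complexity rises.
for p in (0.1, 0.5):
    print(p, simulate_lead_time(rework_prob=p, concurrency=2),
          simulate_lead_time(rework_prob=p, concurrency=4))
```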
Abstract:
We show the feasibility of using quantum Monte Carlo (QMC) to compute benchmark energies for configuration samples of thermal-equilibrium water clusters and of the bulk liquid containing up to 64 molecules. We note evidence that the accuracy of these benchmarks approaches that of basis-set-converged coupled-cluster calculations. We illustrate the usefulness of the benchmarks by using them to analyze the errors of the popular BLYP approximation of density functional theory (DFT). The results indicate the possibility of using QMC as a routine tool for analyzing DFT errors for non-covalent bonding in many types of condensed-phase molecular systems.
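A brief sketch of the kind of error analysis such benchmarks enable: given reference QMC energies and DFT (BLYP) energies for the same configuration samples, report the mean signed and RMS deviations. The numbers below are placeholder data, not results from the study.

```python
import math

# Placeholder energies (eV per molecule) for a few configurations;
# real values would come from the QMC benchmark and BLYP calculations.
qmc_ref  = [-10.02, -9.87, -10.11, -9.95]
dft_blyp = [-9.91, -9.80, -10.02, -9.85]

errors = [d - q for d, q in zip(dft_blyp, qmc_ref)]
mean_err = sum(errors) / len(errors)
rms_err = math.sqrt(sum(e * e for e in errors) / len(errors))
print(f"mean error {mean_err:+.3f} eV, RMS error {rms_err:.3f} eV")
```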
Abstract:
We present Random Partition Kernels, a new class of kernels derived by demonstrating a natural connection between random partitions of objects and kernels between those objects. We show how the construction can be used to create kernels from methods that would not normally be viewed as random partitions, such as Random Forest. To demonstrate the potential of this method, we propose two new kernels, the Random Forest Kernel and the Fast Cluster Kernel, and show that these kernels consistently outperform standard kernels on problems involving real-world datasets. Finally, we show how the form of these kernels lends itself to a natural approximation that is appropriate for certain big data problems, allowing $O(N)$ inference in methods such as Gaussian Processes, Support Vector Machines and Kernel PCA.
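A minimal sketch of the Random Forest Kernel idea as stated in the abstract: the kernel value for two points is the fraction of trees, i.e. random partitions, in which the two points fall in the same leaf. The sketch assumes scikit-learn, whose `RandomForestClassifier.apply` returns per-tree leaf indices; the dataset and parameters are illustrative.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# leaves[i, t] is the leaf that tree t assigns to sample i.
leaves = forest.apply(X)                              # shape (n, n_trees)

# Kernel: fraction of trees in which two samples share a leaf,
# i.e. co-occur in the same block of a random partition.
K = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)

print(K.shape)   # (150, 150) Gram matrix
print(K[0, 0])   # 1.0: every tree puts a sample in the same leaf as itself
```

Because K is an average of per-tree cluster-indicator comparisons, it naturally suggests the kind of low-rank structure that the abstract's $O(N)$ approximation exploits.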