18 results for Logical necessity
in Cambridge University Engineering Department Publications Database
Abstract:
While it is well known that it is possible to determine the effective flexoelectric coefficient of nematic liquid crystals using hybrid cells [1], this technique can be difficult due to the necessity of using a D.C. field. We have used a second method [2], requiring an A.C. field, to determine this parameter, and here we compare the two techniques. The A.C. method employs the flexoelectrically induced linear electro-optic switching mechanism observed in chiral nematics. In order to use this second technique, a chiral nematic phase is induced in an achiral nematic by the addition of a small amount of chiral additive (∼3% concentration w/w) to give helix pitch lengths of typically 0.5-1.0 μm. We note that the two methods can be used interchangeably, since they produce similar results, and we conclude with a discussion of their relative merits.
Abstract:
Reliable estimates for the maximum available uplift resistance from the backfill soil are essential to prevent upheaval buckling of buried pipelines. The current design code DNV RP F110 does not offer guidance on how to predict the uplift resistance when the cover:pipe diameter (H/D) ratio is less than 2. Hence the current industry practice is to discount the shear contribution from uplift resistance for design scenarios with H/D ratios less than 1. The necessity of this extra conservatism is assessed through a series of full-scale and centrifuge tests, 21 in total, at the Schofield Centre, University of Cambridge. Backfill types include saturated loose sand, saturated dense sand and dry gravel. Data revealed that the Vertical Slip Surface Model remains applicable for design scenarios in loose sand, dense sand and gravel with H/D ratios less than 1, and that there is no evidence that the contribution from shear should be ignored at these low H/D ratios. For uplift events in gravel, the shear component seems reliable if the cover is more than 1-2 times the average particle size (D50), and more research effort is currently being carried out to verify this conclusion. Strain analysis from the Particle Image Velocimetry (PIV) technique proves that the Vertical Slip Surface Model is a good representation of the true uplift deformation mechanism in loose sand at H/D ratios between 0.5 and 3.5. At very low H/D ratios (H/D < 0.5), the deformation mechanism is more wedge-like, but the increased contribution from soil weight is likely to be compensated by the reduced shear contributions. Hence the design equation based on the Vertical Slip Surface Model still produces good estimates for the maximum available uplift resistance. The evolution of the shear strain field from PIV analysis provides useful insight into how uplift resistance is mobilized as the uplift event progresses. Copyright 2010, Offshore Technology Conference.
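The Vertical Slip Surface Model referred to above is commonly written as the weight of the soil prism above the pipe plus shear resistance mobilized on two vertical planes. A minimal sketch of one common form of this design equation follows; the symbols and parameter values here (earth pressure coefficient K, friction angle phi) are illustrative assumptions, not values from the paper:

```python
import math

def uplift_resistance_per_metre(gamma_eff, H, D, K, phi_deg):
    """Peak uplift resistance per metre of pipe, Vertical Slip Surface Model
    (one common form: soil-prism weight + shear on two vertical slip planes).

    gamma_eff : effective unit weight of backfill (kN/m^3)
    H         : cover depth above the pipe crown (m)
    D         : pipe diameter (m)
    K         : lateral earth pressure coefficient (assumed)
    phi_deg   : backfill friction angle (degrees)
    """
    weight = gamma_eff * H * D  # weight of the soil column above the pipe
    shear = gamma_eff * H**2 * K * math.tan(math.radians(phi_deg))  # vertical-plane shear
    return weight + shear
```

Discounting the shear term, as current practice does for H/D < 1, amounts to setting the second term to zero, which is the extra conservatism the tests assess.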
Abstract:
The manufacturing industry is currently facing unprecedented challenges from changes and disturbances. The sources of these changes and disturbances are of different scope and magnitude. They can be of a commercial nature, or linked to fast product development and design, or purely operational (e.g. rush order, machine breakdown, material shortage etc.). In order to meet these requirements it is increasingly important that a production operation be flexible and able to adapt to new and more suitable ways of operating. This paper focuses on a new strategy for enabling manufacturing control systems to adapt to changing conditions both in terms of product variation and production system upgrades. The approach proposed is based on two key concepts: (1) An autonomous and distributed approach to manufacturing control based on multi-agent methods in which so-called operational agents represent the key physical and logical elements in the production environment to be controlled - for example, products and machines and the control strategies that drive them and (2) An adaptation mechanism based around the evolutionary concept of replicator dynamics which updates the behaviour of newly formed operational agents based on historical performance records in order to be better suited to the production environment. An application of this approach for route selection of similar products in manufacturing flow shops is developed and is illustrated in this paper using an example based on the control of an automobile paint shop.
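The replicator-dynamics adaptation mechanism mentioned above can be sketched in a few lines: the proportion of agents adopting a strategy (e.g. a route through the flow shop) grows when its historical performance beats the population average and shrinks otherwise. This is a generic discrete-time replicator step, not the paper's specific update rule; the step size `dt` is an assumption:

```python
def replicator_update(shares, payoffs, dt=0.1):
    """One discrete replicator-dynamics step.

    shares  : current proportion of agents using each strategy (sums to 1)
    payoffs : historical performance score of each strategy (higher is better)
    Returns the updated, renormalized proportions.
    """
    avg = sum(s * p for s, p in zip(shares, payoffs))  # population-average payoff
    # Strategies beating the average gain share; below-average ones lose share.
    new = [s + dt * s * (p - avg) for s, p in zip(shares, payoffs)]
    total = sum(new)
    return [s / total for s in new]
```

Iterating this update shifts newly formed operational agents toward routes with better performance records, which is the adaptive behaviour the abstract describes.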
Abstract:
Spatial light modulators based around liquid crystal on silicon have found use in a variety of telecommunications applications, including the optimization of multimode fibers, free-space communications, and wavelength selective switching. Ferroelectric liquid crystals are attractive in these areas due to their fast switching times and high phase stability, but the necessity for the liquid crystal to spend equal time in each of its two possible states is an issue of practical concern. Using the highly parallel nature of a graphics processing unit architecture, it is possible to calculate DC balancing schemes of exceptional quality and stability.
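The DC-balancing requirement above - that each pixel spend equal time in each of its two states - can be illustrated with the simplest possible scheme, frame inversion. This is only a baseline sketch for intuition; the paper's GPU method computes far higher-quality balancing schedules than this:

```python
def dc_balanced_sequence(frames):
    """Naive DC balancing for a binary spatial light modulator:
    follow every frame with its bitwise inverse, so each pixel spends
    equal time in both states over the full sequence.

    frames : list of binary pixel lists (one list per hologram frame)
    """
    out = []
    for f in frames:
        out.append(f)
        out.append([1 - p for p in f])  # inverse frame restores the DC balance
    return out
```

The cost of this naive scheme is that half the display time shows inverse frames; computing cleverer schedules that hide this cost is where the parallelism of the GPU pays off.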
Abstract:
This review will focus on the possibility that the cerebellum contains an internal model or models of the motor apparatus. Inverse internal models can provide the neural command necessary to achieve some desired trajectory. First, we review the necessity of such a model and the evidence, based on the ocular following response, that inverse models are found within the cerebellar circuitry. Forward internal models predict the consequences of actions and can be used to overcome time delays associated with feedback control. Second, we review the evidence that the cerebellum generates predictions using such a forward model. Finally, we review a computational model that includes multiple paired forward and inverse models and show how such an arrangement can be advantageous for motor learning and control.
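The role of a forward model in overcoming feedback delay can be made concrete with a toy sketch: roll a predictive model of the motor apparatus over the commands already issued but not yet reflected in (delayed) sensory feedback. The linear dynamics and coefficients here are hypothetical illustrations, not a model from the review:

```python
def forward_model_step(x, u, a=0.9, b=0.1):
    """Hypothetical linear forward model: predicted next state from the
    current state estimate x and the motor command u (a, b are assumed)."""
    return a * x + b * u

def predict_current_state(delayed_x, outstanding_commands):
    """Compensate for feedback delay: start from the delayed sensory
    estimate and simulate forward through the commands issued since,
    yielding an up-to-date state estimate for control."""
    x = delayed_x
    for u in outstanding_commands:
        x = forward_model_step(x, u)
    return x
```

A controller acting on this prediction rather than on the raw delayed feedback avoids the instability that feedback delays would otherwise introduce, which is the computational argument for a cerebellar forward model.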
Abstract:
Recent research into the acquisition of spoken language has stressed the importance of learning through embodied linguistic interaction with caregivers rather than through passive observation. However, the necessity of interaction makes experimental work into the simulation of infant speech acquisition difficult because of the technical complexity of building real-time embodied systems. In this paper we present KLAIR: a software toolkit for building simulations of spoken language acquisition through interactions with a virtual infant. The main part of KLAIR is a sensori-motor server that supplies a client machine learning application with a virtual infant on screen that can see, hear and speak. By encapsulating the real-time complexities of audio and video processing within a server that will run on a modern PC, we hope that KLAIR will encourage and facilitate more experimental research into spoken language acquisition through interaction. Copyright © 2009 ISCA.
Abstract:
Observation shows that the watershed-scale models in common use in the United States (US) differ from those used in the European Union (EU). The question arises whether the difference in model use is due to familiarity or necessity. Do conditions in each continent require the use of unique watershed-scale models, or are models sufficiently customizable that independent development of models that serve the same purpose (e.g., continuous/event-based, lumped/distributed, field-/watershed-scale) is unnecessary? This paper explores this question through the application of two continuous, semi-distributed, watershed-scale models (HSPF and HBV-INCA) to a rural catchment in southern England. The Hydrological Simulation Program-Fortran (HSPF) model is in wide use in the United States. The Integrated Catchments (INCA) model has been used extensively in Europe, and particularly in England. The results of simulation from both models are presented herein. Both models performed adequately according to the criteria set for them. This suggests that there was no necessity to have alternative, yet similar, models. This partially supports a general conclusion that resources should be devoted towards training in the use of existing models rather than development of new models that serve a similar purpose to existing models. A further comparison of water quality predictions from both models may alter this conclusion.
Abstract:
In this paper we examine the use of electronic patient records (EPR) by clinical specialists in their development of multidisciplinary care for diagnosis and treatment of breast cancer. We develop a practice theory lens to investigate EPR use across multidisciplinary team practice. Our findings suggest that there are opposing tendencies towards diversity in EPR use on the one hand and towards unity emerging across multidisciplinary work on the other, and that this tension influences the outcomes of EPR use. The value of this perspective is illustrated through the analysis of a year-long, longitudinal case study of a multidisciplinary team of surgeons, oncologists, pathologists, radiologists, and nurse specialists adopting a new EPR. Each group adapted their use of the EPR to their diverse specialist practices, but they nonetheless orientated their use of the EPR to each other's practices sufficiently to support unity in multidisciplinary teamwork. Multidisciplinary practice elements were also reconfigured in an episode of explicit negotiations, resulting in significant changes in EPR use within team meetings. Our study contributes to the growing literature that questions the feasibility and necessity of achieving high levels of standardized, uniform health information technology use in healthcare.
Abstract:
One of the major concerns for engineers in seismically active regions is the prevention of damage caused by earthquake-induced soil liquefaction. Vertical drains can aid dissipation of excess pore pressures both during and after earthquakes. Drain systems are designed using standard design charts based around the concept of a unit cell, assuming each drain is surrounded by more drains. It is unclear how predictable drain performance is outside that unit cell concept, for example, for drains at the edge of a group. Centrifuge testing is a logical method of performing controlled experiments to establish the efficacy of vertical drains, and is used here to identify the effect of drains dealing with very different catchment areas. The importance of this is further highlighted by the results of a test where the same drains have been modified so that each should behave as a unit cell. It is shown that drains with large catchment areas perform more poorly than unit cells, and also have a knock-on detrimental effect on other drains. Copyright © 2011, IGI Global.
Abstract:
We propose a new learning method to infer a mid-level feature representation that combines the advantage of semantic attribute representations with the higher expressive power of non-semantic features. The idea lies in augmenting an existing attribute-based representation with additional dimensions for which an autoencoder model is coupled with a large-margin principle. This construction allows a smooth transition between the zero-shot regime with no training example, the unsupervised regime with training examples but without class labels, and the supervised regime with training examples and with class labels. The resulting optimization problem can be solved efficiently, because several of the necessary steps have closed-form solutions. Through extensive experiments we show that the augmented representation achieves better results in terms of object categorization accuracy than the semantic representation alone. © 2012 Springer-Verlag.
Abstract:
TRIZ (the theory of inventive problem solving) has been promoted by several enthusiasts as a systematic methodology or toolkit that provides a logical approach to developing creativity for innovation and inventive problem solving. The methodology, which emerged from Russia in the 1960s, has spread to over 35 countries across the world. It is now being taught in several universities and it has been applied by a number of global organisations that have found it particularly useful for spurring new product development. However, while its popularity and attractiveness appear to be on a steady increase, there are practical issues which make the use of TRIZ in practice particularly challenging. These practical difficulties have largely been neglected by TRIZ literature. This paper takes a step away from conventional TRIZ literature, by exploring not just the benefits associated with TRIZ knowledge, but the challenges associated with its acquisition and application based on practical experience. Through a survey, first-hand information is collected from people who have tried (successfully and unsuccessfully) to understand and apply the methodology. The challenges recorded cut across a number of issues, ranging from the complex nature of the methodology to underlying organisational and cultural issues which hinder its understanding and application. Another contribution of this paper, potentially useful for TRIZ beginners, is the indication of what tools among the several contained in the TRIZ toolkit would be most useful to learn first, based on their observed degree of usage by the survey respondents. © 2012 Elsevier Ltd. All rights reserved.
Abstract:
There is a growing interest in using 242mAm as a nuclear fuel. The advantages of 242mAm as a nuclear fuel derive from the fact that 242mAm has the highest thermal fission cross section. The thermal capture cross section is relatively low and the number of neutrons per thermal fission is high. These nuclear properties make it possible to obtain nuclear criticality with ultra-thin fuel elements. The possibility of having ultra-thin fuel elements enables the use of these fission products directly, without the necessity of converting their energy to heat, as is done in conventional reactors. There are three options of using such highly energetic and highly ionized fission products. 1. Using the fission products themselves for ionic propulsion. 2. Using the fission products in an MHD generator, in order to obtain electricity directly. 3. Using the fission products to heat a gas up to a high temperature for propulsion purposes. In this work, we are not dealing with a specific reactor design, but only calculating the minimal fuel elements' thickness and the energy of the fission products emerging from these fuel elements. It was found that it is possible to design a nuclear reactor with a fuel element of less than 1 μm of 242mAm. In such a fuel element, 90% of the fission products' energy can escape.
Abstract:
In the modern engineering design cycle the use of computational tools has become a necessity. The complexity of the engineering systems under consideration for design increases dramatically as the demands for advanced and innovative design concepts and engineering products expand. At the same time the advancements in the available technology in terms of computational resources and power, as well as the intelligence of the design software, accommodate these demands and make such tools a viable approach towards the challenge of real-world engineering problems. This class of design optimisation problems is by nature multi-disciplinary. In the present work we establish enhanced optimisation capabilities within the Nimrod/O tool for massively distributed execution of computational tasks through cluster and computational grid resources, and develop the potential to combine and benefit from all the possible available technological advancements, both software and hardware. We develop the interface between a Free Form Deformation geometry management in-house code with the 2D airfoil aerodynamic efficiency evaluation tool XFoil, and the well established multi-objective heuristic optimisation algorithm NSGA-II. A simple airfoil design problem has been defined to demonstrate the functionality of the design system, but also to accommodate a framework for future developments and testing with other state-of-the-art optimisation algorithms such as the Multi-Objective Genetic Algorithm (MOGA) and the Multi-Objective Tabu Search (MOTS) techniques. Ultimately, heavily computationally expensive industrial design cases can be realised within the presented framework that could not be investigated before. ©2012 AIAA.
Abstract:
We investigate performance bounds for feedback control of distributed plants where the controller can be centralized (i.e. it has access to measurements from the whole plant), but sensors only measure differences between neighboring subsystem outputs. Such "distributed sensing" can be a technological necessity in applications where system size exceeds accuracy requirements by many orders of magnitude. We formulate how distributed sensing generally limits the feedback performance that can be made robust to measurement noise and to model uncertainty, without assuming any controller restrictions (among others, no "distributed control" restriction). A major practical consequence is the necessity to cut down integral action on some modes. We particularize the results to spatially invariant systems and finally illustrate implications of our developments for stabilizing the segmented primary mirror of the European Extremely Large Telescope. © 2013 Elsevier Ltd. All rights reserved.
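Why difference-only sensing forces integral action to be cut on some modes can be seen from the sensing matrix itself: for a chain of subsystems where each sensor reads the difference of two neighboring outputs, a uniform offset of all outputs (the "piston" mode) lies in the null space of the sensing map and is invisible to every sensor. A minimal sketch, using a simple chain topology as an assumed example rather than the paper's plant:

```python
import numpy as np

def difference_sensing_matrix(n):
    """Sensing matrix C for a chain of n subsystems where sensor i
    measures y[i+1] - y[i]; shape (n-1, n)."""
    C = np.zeros((n - 1, n))
    for i in range(n - 1):
        C[i, i] = -1.0
        C[i, i + 1] = 1.0
    return C

C = difference_sensing_matrix(5)
piston = np.ones(5)          # uniform offset of all subsystem outputs
residual = C @ piston        # every difference sensor reads zero
```

Since no measurement carries information about the piston mode, an integrator acting on it would wind up on sensor noise alone, which motivates cutting integral action on such unobservable modes.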