35 results for Combined Web crippling and Flange Crushing
in Aston University Research Archive
Abstract:
The manufacturing industry faces many challenges, such as reducing time-to-market and cutting costs. To meet these increasing demands, effective methods are needed to support the early product development stages by bridging the gap between the communication of early design ideas and the evaluation of manufacturing performance. This paper introduces methods of linking the design and manufacturing domains using disparate technologies. The combined technologies include knowledge management support for product lifecycle management systems, Enterprise Resource Planning (ERP) systems, aggregate process planning systems, workflow management and data exchange formats. A case study demonstrates the use of these technologies, illustrated by adding manufacturing knowledge to generate alternative early process plans, which are in turn used by an ERP system to obtain and optimise a rough-cut capacity plan. Copyright © 2010 Inderscience Enterprises Ltd.
Abstract:
Data Envelopment Analysis (DEA) is one of the most widely used methods for measuring the efficiency and productivity of Decision Making Units (DMUs). DEA for a large dataset with many inputs/outputs requires huge computer resources in terms of memory and CPU time. This paper proposes a back-propagation neural network DEA to address this problem for the very large-scale datasets now emerging in practice. The neural network's requirements for computer memory and CPU time are far lower than those of conventional DEA methods, so it can be a useful tool for measuring the efficiency of large datasets. Finally, the back-propagation DEA algorithm is applied to five large datasets and compared with the results obtained by conventional DEA.
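The abstract describes the technique only at a high level; the following is a minimal, hypothetical sketch of the idea, assuming an input-oriented CCR DEA model and a small scikit-learn regressor. Exact DEA scores (one linear programme per DMU) are computed for a subsample only, and a back-propagation network is then trained on them to estimate scores for the full dataset cheaply. All dataset sizes and values are invented.

```python
# Sketch: approximate DEA efficiency scores with a back-propagation network.
import numpy as np
from scipy.optimize import linprog
from sklearn.neural_network import MLPRegressor

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0. X: (m, n) inputs, Y: (s, n) outputs."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1); c[0] = 1.0            # minimise theta
    A_in = np.hstack([-X[:, [j0]], X])          # sum_j lam_j x_ij <= theta * x_ij0
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # sum_j lam_j y_rj >= y_rj0
    A = np.vstack([A_in, A_out])
    b = np.concatenate([np.zeros(m), -Y[:, j0]])
    bounds = [(0, None)] * (n + 1)              # theta >= 0, lambda >= 0
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    return res.x[0]

rng = np.random.default_rng(0)
n_dmus, n_train = 2000, 200                     # hypothetical dataset sizes
X = rng.uniform(1, 10, size=(3, n_dmus))        # 3 inputs per DMU (invented)
Y = rng.uniform(1, 10, size=(2, n_dmus))        # 2 outputs per DMU (invented)

# Exact DEA (one LP per DMU) on the training subsample only.
train = rng.choice(n_dmus, n_train, replace=False)
theta = np.array([ccr_efficiency(X[:, train], Y[:, train], j) for j in range(n_train)])

# Back-propagation network maps (inputs, outputs) -> efficiency score.
feats = np.vstack([X, Y]).T                     # (n_dmus, 5) feature matrix
net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
net.fit(feats[train], theta)
approx = net.predict(feats)                     # fast scores for all 2000 DMUs
```

Note that the subsample scores are measured against the subsample's own frontier; the trade-off between that approximation and the saved LP solves is exactly what the paper's comparison on five large datasets would quantify.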
Abstract:
We present a novel distributed sensor that utilizes the temperature and strain dependence of the frequency at which the Brillouin loss is maximized in the interaction between a cw laser and a pulsed laser. With a 22-km sensing length, a strain resolution of 20 µε and a temperature resolution of 2°C have been achieved with a spatial resolution of 5 m.
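For context, the measurement principle rests on the approximately linear dependence of the Brillouin loss peak frequency on temperature and strain. The sketch below inverts that relationship; the coefficients and reference frequency are typical literature values for standard single-mode fibre, not values reported in this abstract.

```python
# Sketch: converting a measured Brillouin loss peak frequency into strain or
# temperature. Coefficients are assumed typical values, NOT from the paper.
NU_B0 = 12.8e9      # Hz, Brillouin frequency at reference T, zero strain (assumed)
C_T   = 1.0e6       # Hz per degC (assumed typical value)
C_EPS = 0.05e6      # Hz per microstrain (assumed typical value)

def strain_from_shift(nu_b, delta_t=0.0):
    """Microstrain from measured peak frequency, given a known temperature change."""
    return (nu_b - NU_B0 - C_T * delta_t) / C_EPS

def temperature_from_shift(nu_b, strain=0.0):
    """Temperature change (degC) from measured peak frequency at known strain."""
    return (nu_b - NU_B0 - C_EPS * strain) / C_T

# A 1 MHz uncertainty in locating the loss peak maps to roughly 20 microstrain
# or 1 degC -- the order of the resolutions quoted in the abstract.
print(strain_from_shift(12.801e9))   # -> 20.0 microstrain
```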
Abstract:
This thesis examines experimentally and theoretically the behaviour and ultimate strength of rectangular reinforced concrete members under combined torsion, shear and bending. The experimental investigation consists of the test results of 38 longitudinally and transversely reinforced concrete beams subjected to combined loads, ten of which were tested under pure torsion and self-weight. The behaviour of each test beam from the application of the first increment of load until failure is presented. The effects of concrete strength, spacing of the stirrups, the amount of longitudinal steel and the breadth of the section on the ultimate torsional capacity are investigated. Based on the skew-bending mechanism, compatibility, and a linear stress-strain relationship for the concrete and the steel, simple rational equations are derived for the three principal modes of failure corresponding to the following four types of failure observed in the tests: TYPE I, yielding of all the reinforcement at failure, before crushing of the concrete; TYPE II, yielding of the web steel only at failure, before crushing of the concrete; TYPE III, yielding of the longitudinal steel only at failure, before crushing of the concrete; and TYPE IV, crushing of the concrete at failure, before yielding of any of the reinforcement.
Abstract:
Recent developments in service-oriented and distributed computing have created exciting opportunities for the integration of models in service chains to create the Model Web. This offers the potential for orchestrating web data and processing services in complex chains: a flexible approach which exploits the increased access to products and tools, and the scalability offered by the Web. However, the uncertainty inherent in data and models must be quantified and communicated in an interoperable way in order for its effects to be effectively assessed as errors propagate through complex automated model chains. We describe a proposed set of tools for handling, characterizing and communicating uncertainty in this context, and show how they can be used to 'uncertainty-enable' Web Services in a model chain. An example implementation is presented, which combines environmental and publicly contributed data to produce estimates of sea-level air pressure, with estimates of uncertainty which incorporate the effects of model approximation as well as the uncertainty inherent in the observational and derived data.
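As a concrete illustration of what 'uncertainty-enabling' a model chain means in practice, the sketch below propagates input uncertainty through two chained model steps by Monte Carlo sampling. The chained functions are hypothetical stand-ins for web-service calls, and all distributions and error magnitudes are invented.

```python
# Sketch: Monte Carlo uncertainty propagation through a chain of two "services".
import numpy as np

rng = np.random.default_rng(42)
N = 10_000                                     # Monte Carlo sample size

# Observed inputs carried as distributions, not point values (invented numbers).
station_pressure = rng.normal(998.2, 0.5, N)   # hPa, instrument uncertainty
station_altitude = rng.normal(120.0, 3.0, N)   # m, e.g. crowd-sourced, less certain
temperature_k    = rng.normal(288.0, 1.0, N)   # K

def reduce_to_sea_level(p, h, t):
    """First 'service': barometric reduction of station pressure to sea level."""
    return p * np.exp(9.80665 * h / (287.05 * t))

def model_approximation_error(p, rng):
    """Second 'service': add an (assumed zero-mean) model approximation error."""
    return p + rng.normal(0.0, 0.3, p.shape)

# Chain the services sample-by-sample; the output is itself a distribution.
slp = model_approximation_error(reduce_to_sea_level(
    station_pressure, station_altitude, temperature_k), rng)
print(f"sea-level pressure: {slp.mean():.1f} +/- {slp.std():.1f} hPa")
```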
Abstract:
This work is concerned with the behaviour of thin-webbed rolled steel joists or universal beams when they are subjected to concentrated loads applied to the flanges. The prime concern is the effect of high direct stresses causing web failure in a small region of the beam. The literature review shows that although many tests have been carried out on rolled steel beams and built-up girders, no series of tests has restricted the number of variables involved sufficiently to enable firm conclusions to be drawn. The results of 100 tests on several different rolled steel universal beam sections under various types of loading conditions are presented. The majority of the beams are tested by loading with two opposite loads, thus eliminating the effects of bending and shear, except for a small number of beams which are tested simply supported on varying spans. The test results are first compared with the present design standard (BS 449), and it is shown that the British Standard is very conservative for most of the loading conditions included in the tests but unsafe for others. Three possible failure modes are then considered: overall elastic buckling of the web, flexural yielding of the web due to large out-of-plane deflections, and local crushing of the material at the junction of the web and the root fillets. Each mode is considered theoretically and developed to establish the main variables, thus enabling a comparison to be made with the test results. It is shown that each of the three failure modes has particular relevance for individual loading conditions, but that determining the failure load given the beam size and the loading conditions is very difficult in certain instances. Finally, it is shown that there are some empirical relationships between the failure loads and the type of loading for various beam serial sizes.
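To illustrate the first of the three failure modes, the sketch below evaluates the classical elastic plate-buckling stress for a web in vertical compression, σ_cr = kπ²E / [12(1 − ν²)] · (t/d)². This is the textbook expression rather than the thesis's own development, and the buckling coefficient k and section dimensions are assumed values.

```python
# Sketch: elastic critical buckling stress of a web treated as a plate.
import math

def web_buckling_stress(E, nu, t, d, k):
    """Elastic critical stress (N/mm^2) for a web of thickness t and clear depth d."""
    return k * math.pi**2 * E / (12.0 * (1.0 - nu**2)) * (t / d) ** 2

E, nu = 205_000.0, 0.3      # N/mm^2 and Poisson's ratio for steel
t, d = 7.0, 400.0           # mm, hypothetical web thickness and clear depth
k = 4.0                     # assumed coefficient; depends on flange restraint
print(f"critical stress: {web_buckling_stress(E, nu, t, d, k):.0f} N/mm^2")
```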
Abstract:
We report an extension of the procedure devised by Weinstein and Shanks (Memory & Cognition 36:1415-1428, 2008) to study false recognition and priming of pictures. Participants viewed scenes with multiple embedded objects (seen items), then studied the names of these objects and the names of other objects (read items). Finally, participants completed a combined direct (recognition) and indirect (identification) memory test that included seen items, read items, and new items. In the direct test, participants recognized pictures of seen and read items more often than new pictures. In the indirect test, participants' speed at identifying those same pictures was improved for pictures that they had actually studied, and also for falsely recognized pictures whose names they had read. These data provide new evidence that a false-memory induction procedure can elicit memory-like representations that are difficult to distinguish from "true" memories of studied pictures. © 2012 Psychonomic Society, Inc.
Abstract:
Ontologies have become the knowledge representation medium of choice in recent years for a range of computer science specialities including the Semantic Web, Agents, and Bio-informatics. There has been a great deal of research and development in this area combined with hype and reaction. This special issue is concerned with the limitations of ontologies and how these can be addressed, together with a consideration of how we can circumvent or go beyond these constraints. The introduction places the discussion in context and presents the papers included in this issue.
Abstract:
The main argument of this paper is that Natural Language Processing (NLP) does, and will continue to, underlie the Semantic Web (SW), including its initial construction from unstructured sources like the World Wide Web (WWW), whether its advocates realise this or not. Chiefly, we argue, such NLP activity is the only way up to a defensible notion of meaning at conceptual levels (in the original SW diagram) based on lower level empirical computations over usage. Our aim is definitely not to claim logic-bad, NLP-good in any simple-minded way, but to argue that the SW will be a fascinating interaction of these two methodologies, again like the WWW (which has been basically a field for statistical NLP research) but with deeper content. Only NLP technologies (and chiefly information extraction) will be able to provide the requisite RDF knowledge stores for the SW from existing unstructured text databases in the WWW, and in the vast quantities needed. There is no alternative at this point, since a wholly or mostly hand-crafted SW is also unthinkable, as is a SW built from scratch and without reference to the WWW. We also assume that, whatever the limitations on current SW representational power we have drawn attention to here, the SW will continue to grow in a distributed manner so as to serve the needs of scientists, even if it is not perfect. The WWW has already shown how an imperfect artefact can become indispensable.
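As a toy illustration of the claim that information extraction can populate RDF knowledge stores, the sketch below converts extracted (subject, relation, object) triples into an RDF graph with rdflib. The triples are hard-coded stand-ins for NLP pipeline output, and the namespace is hypothetical.

```python
# Sketch: turning information-extraction output into an RDF store.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/")        # hypothetical namespace

# Pretend output of an information-extraction system run over web text.
extracted = [
    ("Aston_University", "locatedIn", "Birmingham"),
    ("Birmingham", "locatedIn", "United_Kingdom"),
]

g = Graph()
g.bind("ex", EX)
for subj, rel, obj in extracted:
    g.add((EX[subj], EX[rel], EX[obj]))
    g.add((EX[subj], RDFS.label, Literal(subj.replace("_", " "))))

print(g.serialize(format="turtle"))
```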
Abstract:
Background & aims: It has been suggested that retinal lutein may improve visual acuity for images that are illuminated by white light. Our aim was to determine the effect of a lutein and antioxidant dietary supplement on visual function. Methods: A prospective, 9- and 18-month, double-masked randomised controlled trial. For the 9-month trial, 46 healthy participants were randomised (using a random number generator) to placebo (n = 25) or active (n = 21) groups. Twenty-nine of these subjects went on to complete 18 months of supplementation, 15 from the placebo group and 14 from the active group. The active group supplemented daily with 6 mg lutein combined with vitamins and minerals. Outcome measures were distance and near visual acuity, contrast sensitivity, and photostress recovery time. The study had 80% power at the 5% significance level for each outcome measure. Data were collected at baseline, 9, and 18 months. Results: There were no statistically significant differences between groups for any of the outcome measures over 9 or 18 months. Conclusion: There was no evidence of an effect of 9 or 18 months of daily supplementation with a lutein-based nutritional supplement on visual function in this group of people with healthy eyes. ISRCTN78467674.
Abstract:
Objective: The aim of the study is to determine the effect of lutein combined with vitamin and mineral supplementation on contrast sensitivity in people with age-related macular disease (ARMD). Design: A prospective, 9-month, double-masked randomized controlled trial. Setting: Aston University, Birmingham, UK and a UK optometric clinical practice. Subjects: Age-related maculopathy (ARM) and atrophic age-related macular degeneration (AMD) participants were randomized (using a random number generator) to either placebo (n = 10) or active (n = 15) groups. Three of the placebo group and two of the active group dropped out. Interventions: The active group supplemented daily with 6 mg lutein combined with vitamins and minerals. The outcome measure was contrast sensitivity (CS) measured using the Pelli-Robson chart, for which the study had 80% power at the 5% significance level to detect a change of 0.3 log units. Results: The CS score increased by 0.07 ± 0.07 and decreased by 0.02 ± 0.18 log units for the placebo and active groups, respectively. The difference between these values is not statistically significant (z = 0.903, P = 0.376). Conclusion: The results suggest that 6 mg of lutein supplementation in combination with other antioxidants is not beneficial for this group. Further work is required to establish optimum dosage levels.
Abstract:
When constructing and using environmental models, it is typical that many of the inputs to the models will not be known perfectly. In some cases, it will be possible to make observations, or occasionally to use physics-based uncertainty propagation, to ascertain the uncertainty on these inputs. However, such observations are often not available, or not even possible, and another approach to characterising the uncertainty on the inputs must be sought. Even when observations are available, if the analysis is being carried out within a Bayesian framework then prior distributions will have to be specified. One option for gathering, or at least estimating, this information is to employ expert elicitation. Expert elicitation is well studied within statistics and psychology and involves the assessment of the beliefs of a group of experts about an uncertain quantity (for example, an input or parameter within a model), typically in terms of obtaining a probability distribution. One of the challenges in expert elicitation is to minimise the biases that might enter into the judgements made by the individual experts, and then to come to a consensus decision within the group of experts. Effort is made in the elicitation exercise to prevent biases clouding the judgements through well-devised questioning schemes. It is also important that, when reaching a consensus, the experts are exposed to the knowledge of the others in the group. Within the FP7 UncertWeb project (http://www.uncertweb.org/), there is a requirement to build a Web-based tool for expert elicitation. In this paper, we discuss some of the issues of building a Web-based elicitation system, covering both the technological aspects and the statistical and scientific issues. In particular, we demonstrate two tools: a Web-based system for the elicitation of continuous random variables and a system designed to elicit uncertainty about categorical random variables in the setting of landcover classification uncertainty. The first of these examples is a generic tool developed to elicit uncertainty about univariate continuous random variables. It is designed to be used within an application context and extends the existing SHELF method, adding a web interface and access to metadata. The tool is developed so that it can be readily integrated with environmental models exposed as web services. The second example was developed for the TREES-3 initiative, which monitors tropical landcover change through ground-truthing at confluence points. It allows experts to validate the accuracy of automated landcover classifications using site-specific imagery and local knowledge. Experts may provide uncertainty information at various levels: from a general rating of their confidence in a site validation to a numerical ranking of the possible landcover types within a segment. A key challenge in the web-based setting is the design of the user interface and the method of interaction between the problem owner and the problem experts. We show the workflow of the elicitation tool, and show how we can represent the final elicited distributions and confusion matrices using UncertML, ready for integration into uncertainty-enabled workflows. We also show how the metadata associated with the elicitation exercise is captured and can be referenced from the elicited result, providing crucial lineage information and thus traceability in the decision-making process.
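One statistical core of such a tool is fitting a parametric distribution to an expert's elicited quantiles, as in SHELF-style elicitation. The sketch below does this by quantile matching for a gamma distribution; the elicited values, the choice of distribution family, and the starting point are all invented for illustration.

```python
# Sketch: fit a parametric distribution to an expert's elicited quantiles.
import numpy as np
from scipy import stats, optimize

# Expert's judgements about an uncertain model input (invented values):
probs     = np.array([0.25, 0.50, 0.75])   # quartiles elicited from the expert
quantiles = np.array([4.0, 5.0, 6.5])      # the expert's stated values

def loss(params):
    """Squared distance between elicited and fitted gamma quantiles."""
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    return np.sum((stats.gamma.ppf(probs, shape, scale=scale) - quantiles) ** 2)

res = optimize.minimize(loss, x0=[4.0, 1.0], method="Nelder-Mead")
shape, scale = res.x
print(f"fitted gamma: shape={shape:.2f}, scale={scale:.2f}")
# The fitted distribution, plus the elicitation metadata, would then be
# encoded (e.g. in UncertML) for use in an uncertainty-enabled model chain.
```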
Abstract:
The aim of this work has been to investigate the behaviour of a continuous rotating annular chromatograph (CRAC) under a combined biochemical reaction and separation duty. Two biochemical reactions have been employed, namely the inversion of sucrose to glucose and fructose in the presence of the enzyme invertase, and the saccharification of liquefied starch to maltose and dextrin using the enzyme maltogenase. Simultaneous biochemical reaction and separation has been successfully carried out for the first time in a CRAC by inverting sucrose to fructose and glucose using the enzyme invertase and continuously collecting pure fractions of glucose and fructose from the base of the column. The CRAC was made of two concentric cylinders which form an annulus 140 cm long by 1.2 cm wide, giving an annular space of 14.5 dm³. The ion exchange resin used was an industrial grade calcium-form Dowex 50W-X4 with a mean diameter of 150 microns. The mobile phase was deionised and deaerated water containing the appropriate enzyme. The annular column was slowly rotated at speeds of up to 240° h⁻¹ while the sucrose substrate was fed continuously through a stationary feed pipe to the top of the resin bed. A systematic investigation of the factors affecting the performance of the CRAC under simultaneous biochemical reaction and separation conditions was carried out using a factorial experimental procedure. The main factors affecting the performance of the system were found to be the feed rate, feed concentration and eluent rate. Results from the experiments indicated that complete conversion could be achieved for feed concentrations of up to 50% w/v sucrose and feed throughputs of up to 17.2 kg sucrose per m³ of resin per hour. The second enzymic reaction, the saccharification of liquefied starch to maltose employing the enzyme maltogenase, has also been successfully carried out in a CRAC. Results from the experiments using soluble potato starch showed that conversions of up to 79% were obtained for a feed concentration of 15.5% w/v at a feed flowrate of 400 cm³/h. The maltose product obtained was over 95% pure. Mathematical modelling and computer simulation of the sucrose inversion system has been carried out. A finite difference method was used to solve the partial differential equations, and the simulation results showed good agreement with the experimental results obtained.
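The abstract does not give the model equations, but a finite-difference treatment of this kind of problem typically discretises a one-dimensional convection-dispersion-reaction equation for the substrate. The sketch below is a much-simplified stand-in: an explicit upwind scheme for ∂c/∂t = D ∂²c/∂z² − u ∂c/∂z − kc, tracking only the sucrose; all parameter values are invented, and the thesis model would also track the glucose and fructose products and the annular rotation.

```python
# Sketch: explicit finite differences for 1-D convection-dispersion-reaction.
import numpy as np

L, n = 1.40, 200                  # bed length (m) and grid points (invented)
dz = L / (n - 1)
u, D, k = 1.0e-3, 1.0e-6, 5.0e-3  # velocity (m/s), dispersion (m^2/s), rate (1/s)
dt = 0.4 * min(dz / u, dz**2 / (2 * D))   # within the explicit stability limit

c = np.zeros(n)
c_feed = 1.0                      # normalised sucrose feed concentration

for _ in range(20_000):
    c_new = c.copy()
    # upwind convection + central dispersion + first-order reaction
    c_new[1:-1] = (c[1:-1]
                   - u * dt / dz * (c[1:-1] - c[:-2])
                   + D * dt / dz**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
                   - k * dt * c[1:-1])
    c_new[0] = c_feed             # Dirichlet inlet condition
    c_new[-1] = c_new[-2]         # zero-gradient outlet condition
    c = c_new

conversion = 1.0 - c[-1] / c_feed
print(f"steady-state sucrose conversion at outlet: {conversion:.1%}")
```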
Abstract:
The work presented in this thesis is divided into two distinct sections. In the first, the functional neuroimaging technique of magnetoencephalography (MEG) is described and a new technique is introduced for accurate combination of the MEG and MRI co-ordinate systems. In the second part of the thesis, MEG and the analysis technique of Synthetic Aperture Magnetometry (SAM) are used to investigate responses of the visual system in the context of functional specialisation within the visual cortex. In chapter one, the sources of MEG signals are described, followed by a brief description of the instrumentation necessary for accurate MEG recordings. The chapter concludes by introducing the forward and inverse problems of MEG, techniques to solve the inverse problem, and a comparison of MEG with other neuroimaging techniques. Chapter two provides an important contribution to the field of MEG research. Firstly, it is described how the MEG and MRI co-ordinate systems are combined for localisation and visualisation of activated brain regions. A previously used co-registration method is then described, and a new technique is introduced. In a series of experiments, it is demonstrated that using fixed fiducial points provides a considerable improvement in the accuracy and reliability of co-registration. Chapter three introduces the visual system, starting from the retina and ending with the higher visual areas. The functions of the magnocellular and parvocellular pathways are described, and it is shown how the parallel visual pathways remain segregated throughout the visual system. The structural and functional organisation of the visual cortex is then described. Chapter four presents strong evidence in favour of the link between conscious experience and synchronised brain activity. The spatiotemporal responses of the visual cortex are measured in response to specific gratings. It is shown that stimuli that induce visual discomfort and visual illusions share their physical properties with those that induce highly synchronised gamma-frequency oscillations in the primary visual cortex. Finally, chapter five is concerned with the localisation of colour processing in the visual cortex. In this first ever use of SAM to investigate colour processing in the visual cortex, it is shown that in response to isoluminant chromatic gratings, the highest magnitude of cortical activity arises in area V2.
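Fiducial-based co-registration of the kind described in chapter two amounts to finding the rigid-body transform that maps landmark positions measured in the MEG head frame onto the same landmarks in the MRI frame. The sketch below uses the standard SVD-based (Kabsch/Procrustes) least-squares solution; the fiducial coordinates are invented and the function names are illustrative.

```python
# Sketch: rigid-body co-registration from fiducial points via SVD (Kabsch).
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t such that dst ~ src @ R.T + t."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)         # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c_dst - R @ c_src

# Nasion and left/right pre-auricular points in each frame (mm, invented).
meg = np.array([[95.0, 0.0, 0.0], [0.0, 70.0, 0.0], [0.0, -72.0, 0.0]])
mri = np.array([[90.1, 10.2, 5.0], [-5.3, 78.9, 4.1], [2.2, -63.0, 6.8]])

R, t = rigid_transform(meg, mri)
residual = meg @ R.T + t - mri
print("RMS fiducial error (mm):", np.sqrt((residual ** 2).mean()))
```

The residual fiducial error computed here is the usual figure of merit for co-registration accuracy, which is what the fixed-fiducial-point experiments in chapter two set out to improve.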