914 results for Cutting speed
Abstract:
Turbogenerating is a form of turbocompounding whereby a Turbogenerator is placed in the exhaust stream of an internal combustion engine. The Turbogenerator converts a portion of the energy expelled in the exhaust gas into electricity, which can then be used to supplement the crankshaft power. Previous investigations have shown that the addition of a Turbogenerator can increase system efficiency by up to 9%; however, those investigations pertain to the engine system operating at one fixed engine speed. The purpose of this paper is to investigate how the system, and in particular the Turbogenerator, operates during engine speed transients. On turbocharged engines, turbocharger lag is an issue; the addition of a Turbogenerator can somewhat alleviate it by altering the speed at which the Turbogenerator operates during the engine's speed transient. During the transients, the Turbogenerator can be thought of as acting in a similar manner to a variable geometry turbine, in that its speed can cause a change in the turbocharger turbine's pressure ratio. This paper shows that adding a Turbogenerator to a turbocharged engine can enhance transient performance. This enhancement is shown by comparing the turbogenerated engine to a similar turbocharged engine: the addition of a Turbogenerator can reduce the time taken to reach full power by up to 7% while at the same time improving overall efficiency by 7.1% during the engine speed transient.
Abstract:
Flow processing is a fundamental element of stateful traffic classification and has been recognized as an essential factor in delivering today's application-aware network operations and security services. The basic function within a flow processing engine is to search and maintain a flow table: create new flow entries if no entry matches, and associate each entry with flow states and actions for future queries. Network state information must be managed on a per-flow basis in an efficient way to enable Ethernet frame transmissions at 40 Gbit/s (Gbps) and, in the near future, 100 Gbps. This paper presents a hardware solution for flow state management that implements large-scale flow tables on popular computer memories using DDR3 SDRAMs. Working with a dedicated flow lookup table at over 90 million lookups per second, the proposed system is able to manage 512-bit state information at run time.
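As a concrete illustration of the lookup-or-create function described above, the following Python sketch shows the general technique in software form. It is only a hedged approximation: the paper's solution is a hardware design over DDR3 SDRAM, and the 5-tuple key, field names and dictionary-based table here are illustrative assumptions.

    # Software sketch of a flow table's lookup-or-create path (illustrative only;
    # the paper implements this in hardware over DDR3 SDRAM).
    from dataclasses import dataclass

    @dataclass
    class FlowEntry:
        state: bytes = bytes(64)   # 512-bit per-flow state block, as in the proposed system
        action: str = "forward"    # per-flow action consulted on future queries

    table: dict[tuple, FlowEntry] = {}

    def lookup_or_create(src_ip, dst_ip, src_port, dst_port, proto):
        key = (src_ip, dst_ip, src_port, dst_port, proto)  # classic 5-tuple flow key (assumed)
        entry = table.get(key)
        if entry is None:          # no entry matches: create a new flow entry
            entry = FlowEntry()
            table[key] = entry
        return entry               # caller reads/updates the flow state and action

A hardware pipeline replaces the dictionary with hashed buckets in external memory, which is presumably what a rate of 90 million lookups per second requires.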
Abstract:
The dynamics of predator-prey pursuit appear complex, making the development of a framework explaining predator and prey strategies problematic. We develop a model for terrestrial, cursorial predators to examine how animal mass modulates predator and prey trajectories and affects the best strategies for both parties. We incorporated the maximum speed-mass relationship with an explanation of why larger animals should have greater turn radii: the forces needed to turn scale linearly with mass, whereas the maximum forces an animal can exert scale with a 2/3 power law. This clarifies why, in a meta-analysis, we found a preponderance of predator/prey mass ratios that minimized the turn radii of predators compared to their prey. It also explains why acceleration data from wild cheetahs pursuing different prey showed different cornering behaviour with prey type. The outcome of predator-prey pursuits thus depends critically on mass effects and the ability of animals to time turns precisely.
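The scaling argument in this abstract can be made explicit as a short worked equation. Turning at speed v on a radius r demands a centripetal force that grows linearly with mass m, while the maximum force an animal can exert scales as m to the 2/3 power; equating the two gives the minimum turn radius:

\[
F_{\text{turn}} = \frac{m v^2}{r}, \qquad F_{\max} \propto m^{2/3}
\quad\Rightarrow\quad
r_{\min} \propto \frac{m v^2}{m^{2/3}} = m^{1/3}\, v^2 .
\]

At a given speed, larger animals therefore necessarily sweep wider minimum turns, which is why the predator/prey mass ratio shapes cornering strategy.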
Abstract:
Molecular Dynamics Simulations (MDS) are constantly being used to make important contributions to our fundamental understanding of material behaviour at the atomic scale for a variety of thermodynamic processes. This chapter shows that molecular dynamics simulation is a robust numerical analysis tool for addressing a range of complex nanofinishing (machining) problems that are otherwise difficult or impossible to understand using other methods. For example, the mechanism of nanometric cutting of silicon carbide is influenced by a number of variables, such as machine tool performance, machining conditions, material properties, and cutting tool performance (material microstructure and physical geometry of the contact), and all of these variables cannot be monitored online through experimental examination. They can, however, be studied using an advanced simulation-based approach such as MDS. This chapter details how MD simulation can be used as a research and commercial tool to understand key issues in ultra-precision manufacturing research, and a specific case is addressed by studying the diamond machining of silicon carbide. While this is appreciable, there are many challenges and opportunities in this fertile area. For example, the world of MD simulations is dependent on present-day computers and on the accuracy and reliability of potential energy functions [109]. This presents a limitation: real-world-scale simulation models are yet to be developed. The simulated length and timescales are far shorter than the experimental ones, compounded by the fact that contact loading simulations are typically done in the speed range of a few hundred m/s against an experimental speed of typically about 1 m/s [17]. Consequently, MD simulations suffer from the spurious effects of high cutting speeds, and the accuracy of the simulation results has yet to be fully explored. The development of user-friendly software could help make molecular dynamics an integral part of computer-aided design and manufacturing to tackle a range of machining problems from all perspectives, including materials science (the phase of the material formed due to the sub-surface deformation layer), electronics and optics (properties of the finished machined surface due to the metallurgical transformation in comparison to the bulk material), and mechanical engineering (the extent of residual stresses in the machined component) [110]. Overall, this chapter provides key information concerning the diamond machining of SiC, which is classed as a hard, brittle material. From the analysis presented in the earlier sections, MD simulation has helped in understanding the effects of crystal anisotropy in the nanometric cutting of 3C-SiC by revealing the atomic-level deformation mechanisms for different crystal orientations and cutting directions. In addition, the MD simulation revealed that the material removal mechanism on the (111) surface of 3C-SiC (akin to diamond) is dominated by cleavage. These understandings led to the development of a new approach, named the "surface defect machining" method, which has the potential to be more effective to implement than ductile-mode micro laser assisted machining or conventional nanometric cutting.
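For readers unfamiliar with what such a simulation actually iterates, the heart of any MD code, including nanometric cutting studies, is a time-stepping loop over Newton's equations of motion. The sketch below is a minimal velocity-Verlet integrator in Python; it is a generic illustration under simplifying assumptions, not the many-body SiC potentials or HPC-scale models discussed in this chapter.

    # Minimal velocity-Verlet time stepper: the core loop of an MD simulation.
    # Generic sketch only; production nanometric-cutting MD uses many-body
    # potentials (e.g. for SiC) and millions of atoms on HPC hardware.
    import numpy as np

    def velocity_verlet(pos, vel, mass, dt, n_steps, forces_fn):
        # pos and vel are numpy arrays of shape (n_atoms, 3),
        # e.g. pos, vel = np.zeros((n, 3)), np.zeros((n, 3))
        f = forces_fn(pos)                      # initial forces from the potential
        for _ in range(n_steps):
            vel += 0.5 * dt * f / mass          # half-kick with current forces
            pos += dt * vel                     # drift positions one time step
            f = forces_fn(pos)                  # recompute forces at new positions
            vel += 0.5 * dt * f / mass          # second half-kick completes the step
        return pos, vel

The cutting-speed limitation noted above arises in this loop: with femtosecond time steps, even one second of simulated machining is computationally out of reach, so simulated tools are driven far faster than experimental ones.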
Abstract:
Molecular dynamics (MD) simulation was carried out to acquire an in-depth understanding of the flow behaviour of single-crystal silicon during nanometric cutting on three principal crystallographic planes and at different cutting temperatures. The key findings were that (i) the substrate material underneath the cutting tool was observed, for the first time, to experience a rotational flow akin to fluids at all the tested temperatures up to 1200 K; (ii) the degree of flow in terms of vorticity was found to be higher on the (1 1 1) crystal plane, signifying better machinability on this orientation, in accord with the current pool of knowledge; (iii) an increase in the machining temperature reduces the springback effect and thereby the elastic recovery; and (iv) the location of the stagnation region in the cutting zone of the substrate showed significant dependence on the cutting orientation and the cutting temperature.
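For reference, the vorticity used here to quantify the degree of rotational flow is the standard curl of the velocity field,

\[
\boldsymbol{\omega} = \nabla \times \mathbf{v},
\]

so a higher magnitude of vorticity beneath the tool on the (1 1 1) plane corresponds to stronger fluid-like rotation of the substrate material, and hence to the better machinability reported for that orientation.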
Abstract:
This paper describes the hydrogeological processes which caused unexpected instability and quick conditions during the excavation of a 25 m deep cutting through a drumlin in County Down, Northern Ireland. A conceptual hydrogeological model of the cutting, based on pore pressures monitored during and after the excavation, demonstrates how quick conditions at the toe of the cutting caused liquefaction of the till. Stability of the cutting was re-established by draining the highly permeable, weathered greywacke which underlies the drumlin, through the use of a deep toe drain. In spite of this drainage, the cutting was only marginally stable due to the presence of a low-permeability zone in the till above the bedrock, which limits the reduction of elevated pore pressures within the upper to mid-depths of the drumlin. The factor of safety has been further improved by the addition of vertical relief drains at the crest and berm of the cutting, which relieve the pore pressures within the upper till by intercepting the weathered bedrock. The paper also highlights the importance of carrying out an adequate site investigation, compliant with Eurocode 7, and additional monitoring in excavations in stiff, low-permeability till.
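For context on the quick conditions mentioned above: in standard soil mechanics, upward seepage liquefies a soil when the hydraulic gradient i reaches the critical value

\[
i_{\text{crit}} = \frac{\gamma_{\text{sat}} - \gamma_w}{\gamma_w} \approx 1,
\]

where the saturated unit weight of the till and the unit weight of water appear in the numerator and denominator; at this point the effective stress vanishes. The toe drain and relief drains described here both act to keep the gradient at the toe safely below this value.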
Abstract:
The speed of manufacturing processes today depends on a trade-off between the physical processes of production, the wider system that allows these processes to operate, and the co-ordination of a supply chain in pursuit of meeting customer needs. Could the speed of this activity be doubled? This paper explores this hypothetical question, starting with an examination of a diverse set of case studies spanning the activities of manufacturing. This reveals that the constraints on increasing manufacturing speed have some common themes, and several of these are examined in more detail to identify absolute limits to performance. The physical processes of production are constrained by factors such as machine stiffness, actuator acceleration, heat transfer and the delivery of fluids, and for each of these a simplified model is used to analyse the gap between current and limiting performance. The wider systems of production require the co-ordination of resources and push at the limits of human biophysical and cognitive capability. Evidence about these limits is explored and related to current practice. Out of this discussion, five promising innovations are explored to show examples of how manufacturing speed is increasing: line arrays of point actuators, parallel tools, tailored application of precision, hybridisation and task taxonomies. The paper addresses a broad question which could be pursued by a wider community and in greater depth, but even this first examination suggests the possibility of unanticipated innovations in current manufacturing practices.
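As a flavour of such gap-to-limit analysis, take the simplest listed constraint, actuator acceleration: a point actuator executing a bang-bang move of length d (accelerating at a for half the move, decelerating for the rest) needs a time

\[
t_{\min} = 2\sqrt{d/a},
\]

so halving the move time requires quadrupling the available acceleration. This is a generic illustration of the style of simplified model involved, not one of the paper's specific case studies.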
Abstract:
Roadside safety barrier designs are tested with passenger cars in Europe using the standard EN 1317, in which the impact angle for normal, high and very high containment level tests is 20°. In comparison to EN 1317, the US standard MASH has higher impact angles for cars and pickups (25°) and different vehicle masses. Studies in Europe (RISER) and the US have shown values for the 90th percentile impact angle of 30°–34°. Thus, the limited evidence available suggests that the 20° angle applied in EN 1317 may be too low.
The first goal of this paper is to use the US NCHRP database (Project NCHRP 17–22) to assess the distribution of impact angle and collision speed in recent run-off-road (ROR) accidents. Second, based on the findings of the statistical analysis and on analysis of impact angles and speeds in the literature, an LS-DYNA finite element analysis was carried out to evaluate the normal containment level of concrete barriers in non-standard collisions. The FE model was validated against a crash test of a portable concrete barrier carried out at the UK Transport Research Laboratory (TRL).
The accident data analysis for run-off-road accidents indicates that a substantial proportion of accidents have an impact angle in excess of 20°. The baseline LS-DYNA model showed good agreement with experimental acceleration severity index (ASI) data, and the parametric analysis indicates a very significant influence of impact angle on ASI. Accordingly, a review of European run-off-road accidents and of the configuration of EN 1317 should be performed.
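For context, the ASI referred to above is defined in EN 1317 from the vehicle accelerations averaged over a 50 ms moving window and normalised by limit values of 12 g, 9 g and 10 g in the longitudinal, lateral and vertical directions, with the reported index being the maximum over the impact:

\[
\text{ASI}(t) = \sqrt{\left(\frac{\bar{a}_x}{12g}\right)^2 + \left(\frac{\bar{a}_y}{9g}\right)^2 + \left(\frac{\bar{a}_z}{10g}\right)^2}\,.
\]

A steeper impact angle raises the lateral acceleration pulse, which is consistent with the strong influence of angle on ASI found in the parametric study.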
Abstract:
Signal integrity in high-speed interconnected digital systems is assessed through the simulation of physical (transistor-level) models, which is computationally costly (e.g. in CPU run time and memory storage) and requires physical details of the device's internal structure to be made available. This scenario increases interest in the alternative of behavioural modelling, which describes the operating characteristics of a device from observation of its electrical input/output (I/O) signals. The I/O interfaces on memory chips, which contribute the most computational load, perform complex functions and therefore involve a large number of pins. In particular, output buffers inevitably distort the signals due to their dynamics and nonlinearity. They therefore constitute the critical point in integrated circuits (ICs) for guaranteeing reliable transmission in high-speed digital communications. In this doctoral work, the previously neglected nonlinear dynamic effects of the output buffer are studied and efficiently modelled to reduce the complexity of parametric black-box modelling, thereby improving on the standard IBIS model. This is achieved by following a semi-physical approach that combines the formulation characteristics of black-box models, analysis of the electrical signals observed at the I/O, and properties of the buffer's physical structure under practical operating conditions. This approach leads to a physically inspired behavioural model construction process that overcomes the problems of previous approaches, optimising the resources used at the different stages of model generation (i.e. characterisation, formulation, extraction and implementation) to simulate the nonlinear dynamic behaviour of the buffer. Consequently, the most significant contribution of this thesis is the development of a new two-port analogue behavioural model suitable for overclocking simulation, which is of particular interest for the latest uses of high-data-rate memory I/O interfaces. The effectiveness and accuracy of the behavioural models developed and implemented are qualitatively and quantitatively assessed by comparing the numerical results of their function extraction and transient simulation with the corresponding state-of-the-art reference model, IBIS.
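To make the black-box setting concrete: a common parametric formulation in this literature (offered here as background, not necessarily the exact model developed in the thesis) represents the buffer output current as a weighted combination of two submodels identified with the buffer held in its fixed high and low output states,

\[
i(t) = w_H(t)\, i_H\big(v(t)\big) + w_L(t)\, i_L\big(v(t)\big),
\]

where the submodels capture the static and dynamic nonlinearities of each state and the switching weights encode the transitions between them. The characterisation, extraction and implementation steps that the semi-physical approach optimises are precisely the steps that identify such submodels and weights from observed I/O signals.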
Abstract:
The Trembling Line is a film and multi-channel sound installation exploring the visual and acoustic echoes between decipherable musical gestures and abstract patterning, orchestral swells and extreme high-speed slow-motion close-ups of strings and percussion. It features a score by Leo Grant and a newly devised multi-channel audio system by the Institute of Sound and Vibration Research (ISVR), University of Southampton. The multi-channel speaker array is devised as an intimate sound spatialisation system in which each element of sound can be pried apart and reconfigured, to create a dynamically disorienting sonic experience. It becomes the inside of a musical instrument, an acoustic envelope or cage of sorts, through which viewers are invited to experience the film and generate cross-sensory connections and counterpoints between the sound and the visuals. Funded by a Leverhulme Artist-in-Residence Award and John Hansard Gallery, with support from ISVR and the Music Department, University of Southampton. The project provided a rare opportunity to work creatively with new cutting-edge developments in sound distribution devised by ISVR, devising a new speaker array: a multi-channel surround listening sphere which spatialises the auditory experience. The sphere is currently used by ISVR for outreach and teaching purposes, and has enabled further collaborations between Music staff and students at the University of Southampton and ISVR staff. Exhibitions: solo exhibition at John Hansard Gallery, Southampton (Dec 2015-Jan 2016), across 5 rooms, including a retrospective of five previous film-works and a new series of photographic stills. Public lectures: two within the gallery. Reviews and interviews: Art Monthly, Studio International, The Quietus, The Wire Magazine.
Abstract:
Thought speed and variability are purportedly common features of specific psychological states, such as mania and anxiety. The present study explored the independent and combinational influence of these variables upon condition-specific symptoms and affective state, as proposed by Pronin and Jacobs’ (Perspect Psychol Sci, 3:461–485, 2008) theory of mental motion. A general population sample was recruited online (N = 263). Participants completed a thought speed and variability manipulation task, inducing a combination of fast/slow and varied/repetitive thought. Change in mania and anxiety symptoms was assessed through direct self-reported symptom levels and indirect, processing bias assessment (threat interpretation). Results indicated that fast and varied thought independently increased self-reported mania symptoms. Affect was significantly less positive and more negative during slow thought. No change in anxiety symptoms or threat interpretation was found between manipulation conditions. No evidence for the proposed combinational influence of speed and variability was found. Implications and avenues for therapeutic intervention are discussed.
Abstract:
Turbo codes experience a significant decoding delay because of the iterative nature of the decoding algorithms, the high number of metric computations, and the complexity added by the (de)interleaver. The extrinsic information is exchanged sequentially between two Soft-Input Soft-Output (SISO) decoders. Instead of this sequential process, a received frame can be divided into smaller windows to be processed in parallel. In this paper, a novel parallel processing methodology is proposed based on previous parallel decoding techniques. A novel Contention-Free (CF) interleaver is proposed as part of the decoding architecture, which allows extrinsic Log-Likelihood Ratios (LLRs) to be used immediately as a-priori LLRs to start the second half of the iterative turbo decoding. The simulation case studies performed in this paper show that our parallel decoding method can provide an 80% time saving compared to standard decoding and a 30% time saving compared to previous parallel decoding methods, at the expense of a 0.3 dB Bit Error Rate (BER) performance degradation.
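The contention-free property can be stated, and checked, directly: for a frame of length K decoded by M parallel SISO units on windows of length W = K/M, an interleaver pi is contention-free if at every local step j the M simultaneous accesses pi(j + tW), for t = 0, ..., M-1, fall into M distinct windows, so no two units touch the same memory bank at once. A small checker in Python (illustrative of the standard definition, not the paper's proposed CF construction):

    # Verify the contention-free property of interleaver pi for parallel
    # windows of length window_len: at each local step j, the simultaneous
    # accesses pi[j + t*window_len] must land in distinct windows (banks).
    def is_contention_free(pi, window_len):
        assert len(pi) % window_len == 0
        num_windows = len(pi) // window_len
        for j in range(window_len):
            banks = {pi[j + t * window_len] // window_len
                     for t in range(num_windows)}
            if len(banks) != num_windows:   # two units would hit one bank
                return False
        return True

For example, is_contention_free(list(range(8)), 2) returns True because the identity mapping trivially keeps parallel accesses in separate windows; a useful CF interleaver must preserve this property while still spreading bits widely enough for coding gain.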
Abstract:
Thesis (Ph.D.)--University of Washington, 2015