Abstract:
Conversion of xylose to l-lactate was carried out by Lactococcus lactis IO-1 using an electrodialysis bioprocess (ED-BP). At 50 g l⁻¹ xylose, the ED-BP was complete in half the time (32 h) taken by the control culture without electrodialysis (>60 h). At 80 g l⁻¹ xylose, the control culture was unable to consume more than 50 g l⁻¹ of xylose, whereas the ED-BP consumed 75 g l⁻¹ in 45 h. Thus, the simultaneous removal of lactate and acetate by ED-BP was associated with high-speed l-lactate production, increased xylose consumption and increased l-lactate production. Copyright (C) 1998 Elsevier Science B.V.
Abstract:
Critical decisions are made by decision-makers throughout the life-cycle of large-scale projects. These decisions are crucial as they have a direct impact upon the outcome and the success of projects. To aid decision-makers in the decision-making process we present an evidential reasoning framework. This approach utilizes the Dezert-Smarandache theory to fuse heterogeneous evidence sources that suffer from varying levels of uncertainty, imprecision and conflict, providing beliefs over the decision options. To analyse the impact of source reliability and priority upon the decision-making process, a reliability discounting technique and a priority discounting technique are applied. A maximal consistent subset is constructed to aid in defining where discounting should be applied. Application of the evidential reasoning framework is illustrated using a case study based in the aerospace domain.
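The reliability discounting step mentioned above has a classical closed form in Dempster-Shafer theory, which DSmT generalizes. The sketch below is a minimal illustration of that classical (Shafer) discounting only, not the paper's DSmT fusion; the two-option frame and the reliability value are hypothetical.

```python
# A minimal sketch of classical (Shafer) reliability discounting,
# which DSmT generalizes; the frame {"go", "no-go"} and the
# reliability factor below are hypothetical illustration values.

THETA = "go|no-go"  # the full frame of discernment: total ignorance

def discount(mass, alpha):
    """Discount a mass function by a reliability factor alpha in [0, 1].

    Every focal element keeps alpha of its mass; the remaining
    (1 - alpha) is transferred to total ignorance (the full frame).
    """
    out = {focal: alpha * m for focal, m in mass.items() if focal != THETA}
    out[THETA] = 1.0 - alpha + alpha * mass.get(THETA, 0.0)
    return out

source = {"go": 0.6, "no-go": 0.3, THETA: 0.1}
weak = discount(source, 0.8)  # source judged only 80% reliable
# Mass shrinks on "go" (0.6 -> 0.48) and "no-go" (0.3 -> 0.24);
# ignorance grows (0.1 -> 0.28), and the masses still sum to 1.
```

A fully unreliable source (alpha = 0) collapses to total ignorance, which is why discounting before fusion prevents an untrusted source from dominating the combined belief.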
Abstract:
Very high-speed, low-area hardware architectures of the SHACAL-1 encryption algorithm are presented in this paper. The SHACAL algorithm was a submission to the New European Schemes for Signatures, Integrity and Encryption (NESSIE) project and is based on the SHA-1 hash algorithm. To date, no performance metrics have been published on hardware implementations of this algorithm. A fully pipelined SHACAL-1 encryption architecture is described and, when implemented on a Virtex-II XC2V4000 FPGA device, it achieves a throughput of 17 Gbps. A fully pipelined decryption architecture achieves 13 Gbps on the same device. In addition, iterative architectures of the algorithm are presented. The SHACAL-1 decryption algorithm is also derived and presented in this paper, since it was not provided in the submission to NESSIE. © Springer-Verlag Berlin Heidelberg 2003.
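Since SHACAL-1 reuses the SHA-1 round structure, deriving decryption amounts to inverting each round. The sketch below shows one encryption round and its inverse, assuming the f-function and constant that SHA-1 uses in rounds 0-19; the key schedule and the three other round types are omitted, so this illustrates invertibility rather than the full cipher.

```python
# A hedged sketch of one SHACAL-1 round and its inverse, assuming the
# SHA-1 round structure the cipher borrows: a 160-bit state held as
# five 32-bit words, with w the round key word. Rounds 0-19's
# f-function and constant are used; the key schedule is omitted.

MASK = 0xFFFFFFFF

def rotl(x, n):
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & MASK

def f_ch(b, c, d):
    # "Choice" f-function of SHA-1 rounds 0-19; later rounds differ.
    return ((b & c) | (~b & d)) & MASK

def enc_round(state, w, k=0x5A827999):
    a, b, c, d, e = state
    t = (rotl(a, 5) + f_ch(b, c, d) + e + w + k) & MASK
    return (t, a, rotl(b, 30), c, d)

def dec_round(state, w, k=0x5A827999):
    # Undo the word shifts and the 30-bit rotation, then subtract
    # the additive terms to recover the old e word.
    t, a, b30, c, d = state
    b = rotl(b30, 2)  # inverse of a 30-bit left rotation
    e = (t - rotl(a, 5) - f_ch(b, c, d) - w - k) & MASK
    return (a, b, c, d, e)
```

Because each round is invertible given its round key, decryption simply applies the inverse rounds with the key words in reverse order, which is the derivation the paper supplies for the missing NESSIE specification.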
Abstract:
Turbogenerating is a form of turbocompounding whereby a Turbogenerator is placed in the exhaust stream of an internal combustion engine. The Turbogenerator converts a portion of the energy expelled in the exhaust gas into electricity, which can then be used to supplement the crankshaft power. Previous investigations have shown that the addition of a Turbogenerator can increase system efficiency by up to 9%. However, those investigations pertain to the engine operating at one fixed speed. The purpose of this paper is to investigate how the system, and in particular the Turbogenerator, operates during engine speed transients. On turbocharged engines, turbocharger lag is an issue; the addition of a Turbogenerator can somewhat alleviate it by altering the speed at which the Turbogenerator operates during the engine's speed transient. During the transients, the Turbogenerator can be thought of as acting in a similar manner to a variable geometry turbine, in that its speed can change the turbocharger turbine's pressure ratio. This paper shows that adding a Turbogenerator to a turbocharged engine can enhance transient performance. This enhancement is shown by comparing the turbogenerated engine to a similar turbocharged engine: the addition of a Turbogenerator can reduce the time taken to reach full power by up to 7% while at the same time improving overall efficiency by 7.1% during the engine speed transient.
Abstract:
Flow processing is a fundamental element of stateful traffic classification and has been recognized as an essential factor in delivering today's application-aware network operations and security services. The basic function within a flow processing engine is to search and maintain a flow table: create new flow entries if no entry matches, and associate each entry with flow states and actions for future queries. Network state information on a per-flow basis must be managed efficiently to enable Ethernet frame transmissions at 40 Gbit/s (Gbps) and, in the near future, 100 Gbps. This paper presents a hardware solution for flow state management that implements large-scale flow tables on popular computer memories using DDR3 SDRAMs. Working with a dedicated flow lookup table at over 90 million lookups per second, the proposed system is able to manage 512-bit state information at run time.
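The search-and-maintain logic described above (look up a flow, create an entry on a miss, update per-flow state on a hit) can be sketched in a few lines. The dict-based table, 5-tuple key and NEW/ESTABLISHED state record below are simplifications invented for illustration; the paper's engine instead hashes into DDR3 SDRAM banks at line rate.

```python
# A minimal, software-only sketch of create-on-miss flow table logic,
# assuming a 5-tuple flow key and a hypothetical per-flow state record.
# The paper's hardware design keeps this state in DDR3 SDRAM.

class FlowTable:
    def __init__(self):
        self.table = {}  # flow 5-tuple -> per-flow state record

    def process(self, five_tuple):
        """Look up a flow; create a new entry on a miss, update on a hit."""
        entry = self.table.get(five_tuple)
        if entry is None:
            # Miss: install a fresh entry for this flow.
            entry = {"state": "NEW", "packets": 0}
            self.table[five_tuple] = entry
        entry["packets"] += 1
        if entry["packets"] > 1:
            entry["state"] = "ESTABLISHED"
        return entry

ft = FlowTable()
flow = ("10.0.0.1", "10.0.0.2", 1234, 80, "TCP")
first = ft.process(flow)   # miss: a new entry is created as NEW
second = ft.process(flow)  # hit: the same entry becomes ESTABLISHED
```

At 100 Gbps the per-packet time budget is tens of nanoseconds, which is why the paper moves this lookup into dedicated hardware rather than a host data structure like the one above.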
Abstract:
The dynamics of predator–prey pursuit appears complex, making the development of a framework explaining predator and prey strategies problematic. We develop a model for terrestrial, cursorial predators to examine how animal mass modulates predator and prey trajectories and affects the best strategies for both parties. We incorporated the maximum speed-mass relationship with an explanation of why larger animals should have greater turn radii: the forces needed to turn scale linearly with mass, whereas the maximum forces an animal can exert scale with a 2/3 power law. This clarifies why, in a meta-analysis, we found a preponderance of predator/prey mass ratios that minimized the turn radii of predators compared to their prey. It also explains why acceleration data from wild cheetahs pursuing different prey showed different cornering behaviour with prey type. The outcome of predator–prey pursuits thus depends critically on mass effects and the ability of animals to time turns precisely.
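The scaling argument above can be made concrete: with centripetal force F = m v² / r and a maximum exertable force proportional to m^(2/3), the minimum turn radius grows as m^(1/3) v². The constant of proportionality below is a placeholder, so only ratios between animals are meaningful, not absolute radii.

```python
# A back-of-envelope sketch of the turn-radius scaling argument,
# assuming centripetal force F = m * v**2 / r and a maximum exertable
# force F_max = c * m**(2/3). The constant c is a placeholder, so only
# the *ratio* of turn radii between two animals is meaningful here.

def min_turn_radius(mass, speed, c=1.0):
    f_max = c * mass ** (2 / 3)       # available force: 2/3 power law
    return mass * speed ** 2 / f_max  # r_min = m v^2 / F_max, so ~ m^(1/3) v^2

# A predator twice as heavy as its prey, both moving at the same speed:
ratio = min_turn_radius(100.0, 10.0) / min_turn_radius(50.0, 10.0)
print(round(ratio, 3))  # 2**(1/3), about 1.26: the heavier animal turns wider
```

The m^(1/3) dependence is weak, which is consistent with the paper's finding that predator/prey mass ratios cluster where the predator's turn-radius disadvantage is minimized.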
Abstract:
The speed of manufacturing processes today depends on a trade-off between the physical processes of production, the wider system that allows these processes to operate and the co-ordination of a supply chain in the pursuit of meeting customer needs. Could the speed of this activity be doubled? This paper explores this hypothetical question, starting with an examination of a diverse set of case studies spanning the activities of manufacturing. This reveals that the constraints on increasing manufacturing speed have some common themes, and several of these are examined in more detail to identify absolute limits to performance. The physical processes of production are constrained by factors such as machine stiffness, actuator acceleration, heat transfer and the delivery of fluids, and for each of these a simplified model is used to analyse the gap between current and limiting performance. The wider systems of production require the co-ordination of resources and push against human biophysical and cognitive limits. Evidence about these limits is explored and related to current practice. Out of this discussion, five promising innovations are explored to show examples of how manufacturing speed is increasing: line arrays of point actuators, parallel tools, tailored application of precision, hybridisation and task taxonomies. The paper addresses a broad question which could be pursued by a wider community and in greater depth, but even this first examination suggests the possibility of unanticipated innovations in current manufacturing practices.
Abstract:
Roadside safety barrier designs are tested with passenger cars in Europe using standard EN 1317, in which the impact angle for normal, high and very high containment level tests is 20°. In comparison to EN 1317, the US standard MASH has higher impact angles for cars and pickups (25°) and different vehicle masses. Studies in Europe (RISER) and the US have shown values for the 90th percentile impact angle of 30°–34°. Thus, the limited evidence available suggests that the 20° angle applied in EN 1317 may be too low.
The first goal of this paper is to use the US NCHRP database (Project NCHRP 17–22) to assess the distribution of impact angle and collision speed in recent run-off-road (ROR) accidents. Second, based on the findings of the statistical analysis and on analysis of impact angles and speeds in the literature, an LS-DYNA finite element analysis was carried out to evaluate the normal containment level of concrete barriers in non-standard collisions. The FE model was validated against a crash test of a portable concrete barrier carried out at the UK Transport Research Laboratory (TRL).
The accident data analysis for run-off-road accidents indicates that a substantial proportion of accidents have an impact angle in excess of 20°. The baseline LS-DYNA model showed good agreement with experimental acceleration severity index (ASI) data, and the parametric analysis indicates a very significant influence of impact angle on ASI. Accordingly, a review of European run-off-road accidents and the configuration of EN 1317 should be performed.
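For readers unfamiliar with the severity metric, ASI combines the 50 ms moving averages of the three vehicle acceleration components, each normalized by a limit value. The sketch below assumes the EN 1317 formulation with limits of 12 g, 9 g and 10 g for the x, y and z axes; the sample accelerations are invented for illustration.

```python
# A hedged sketch of the acceleration severity index (ASI), assuming
# EN 1317's formulation: 50 ms moving averages of the occupant-
# compartment accelerations (in g) divided by limit values of
# 12 g (longitudinal), 9 g (lateral) and 10 g (vertical).

import math

def asi(ax, ay, az):
    """ASI for one 50 ms-averaged acceleration sample (components in g)."""
    return math.sqrt((ax / 12.0) ** 2 + (ay / 9.0) ** 2 + (az / 10.0) ** 2)

# A purely lateral pulse exactly at the 9 g limit gives ASI = 1.0,
# the nominal boundary of the lowest impact severity class.
print(asi(0.0, 9.0, 0.0))  # 1.0
```

Because each component is squared, ASI is dominated by whichever axis is closest to its limit, which is why the parametric study sees impact angle (which drives the lateral pulse) influence ASI so strongly.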
Abstract:
Signal integrity in high-speed interconnected digital systems, when assessed through the simulation of physical (transistor-level) models, is computationally expensive (e.g., in CPU run time and memory storage) and requires the disclosure of physical details of the device's internal structure. This scenario increases interest in the alternative of behavioural modelling, which describes the operating characteristics of the device from observation of its input/output (I/O) electrical signals. The I/O interfaces in memory chips, which contribute most of the computational load, perform complex functions and therefore involve a high number of pins. In particular, output buffers inevitably distort the signals owing to their nonlinear dynamics. They are therefore the critical point in integrated circuits (ICs) for guaranteeing reliable transmission in high-speed digital communications. In this doctoral work, the previously neglected nonlinear dynamic effects of the output buffer are studied and modelled efficiently in order to reduce the complexity of parametric black-box modelling, thereby improving on the standard IBIS model. This is achieved by following a semi-physical approach that combines the formulation characteristics of black-box models, the analysis of the electrical signals observed at the I/O, and properties of the buffer's physical structure under practical operating conditions. This approach leads to a physically inspired behavioural model construction process that overcomes the problems of previous approaches, optimizing the resources used at the different stages of model generation (namely characterization, formulation, extraction and implementation) to simulate the nonlinear dynamic behaviour of the buffer.
Consequently, the most significant contribution of this thesis is the development of a new two-port analogue behavioural model suitable for simulation under overclocking, which is of particular interest for the most recent uses of I/O interfaces for high-data-rate memories. The effectiveness and accuracy of the behavioural models developed and implemented are qualitatively and quantitatively evaluated by comparing their numerical function-extraction and transient simulation results with those of the corresponding state-of-the-art reference model, IBIS.
Abstract:
Thought speed and variability are purportedly common features of specific psychological states, such as mania and anxiety. The present study explored the independent and combinational influence of these variables upon condition-specific symptoms and affective state, as proposed by Pronin and Jacobs’ (Perspect Psychol Sci, 3:461–485, 2008) theory of mental motion. A general population sample was recruited online (N = 263). Participants completed a thought speed and variability manipulation task, inducing a combination of fast/slow and varied/repetitive thought. Change in mania and anxiety symptoms was assessed through direct self-reported symptom levels and indirect, processing bias assessment (threat interpretation). Results indicated that fast and varied thought independently increased self-reported mania symptoms. Affect was significantly less positive and more negative during slow thought. No change in anxiety symptoms or threat interpretation was found between manipulation conditions. No evidence for the proposed combinational influence of speed and variability was found. Implications and avenues for therapeutic intervention are discussed.
Abstract:
Turbo codes experience a significant decoding delay because of the iterative nature of the decoding algorithms, the high number of metric computations and the complexity added by the (de)interleaver. The extrinsic information is exchanged sequentially between two Soft-Input Soft-Output (SISO) decoders. Instead of this sequential process, a received frame can be divided into smaller windows to be processed in parallel. In this paper, a novel parallel processing methodology is proposed based on previous parallel decoding techniques. A novel Contention-Free (CF) interleaver is proposed as part of the decoding architecture, which allows extrinsic Log-Likelihood Ratios (LLRs) to be used immediately as a priori LLRs to start the second half of the iterative turbo decoding. The simulation case studies performed in this paper show that our parallel decoding method can provide 80% time saving compared to the standard decoding and 30% time saving compared to the previous parallel decoding methods, at the expense of a 0.3 dB Bit Error Rate (BER) performance degradation.
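The contention-free property referenced above means that when a frame is split into M windows decoded in parallel, no two windows may access the same extrinsic-memory bank in the same step. The check below illustrates that property on an LTE-style quadratic permutation polynomial (QPP) interleaver with N = 40, f1 = 3, f2 = 10; the paper's own CF interleaver design is not modelled here.

```python
# A small sketch of the contention-free (CF) property a parallel turbo
# decoder needs, checked on an LTE-style quadratic permutation
# polynomial (QPP) interleaver (N = 40, f1 = 3, f2 = 10). This only
# illustrates the CF condition, not the paper's proposed interleaver.

def qpp(i, n=40, f1=3, f2=10):
    """Quadratic permutation polynomial: pi(i) = (f1*i + f2*i^2) mod n."""
    return (f1 * i + f2 * i * i) % n

def is_contention_free(perm, window):
    """With M = len(perm)//window windows decoded in parallel, no two
    windows may hit the same memory bank (window index) in one step."""
    m = len(perm) // window
    for v in range(window):
        banks = {perm[u * window + v] // window for u in range(m)}
        if len(banks) < m:  # collision: two windows read one bank
            return False
    return True

perm = [qpp(i) for i in range(40)]
assert sorted(perm) == list(range(40))  # it is a valid permutation
print(is_contention_free(perm, 10))     # True: 4 windows, no bank clash
```

When the CF condition holds, all M SISO window processors can fetch and write extrinsic LLRs every cycle without bank arbitration, which is where the parallel time saving comes from.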