843 results for Gains in selection
The effect of a twin tunnel on the propagation of ground-borne vibration from an underground railway
Abstract:
Accurate predictions of ground-borne vibration levels in the vicinity of an underground railway are greatly sought after in modern urban centres. Yet the complexity involved in simulating the underground environment means that it is necessary to make simplifying assumptions about this system. One such commonly made assumption is to ignore the effects of neighbouring tunnels, despite the fact that many underground railway lines consist of twin-bored tunnels, one for the outbound direction and one for the inbound direction. This paper presents a unique model for two tunnels embedded in a homogeneous, elastic full-space. Each of these tunnels is subject to both known, dynamic train forces and dynamic cavity forces. The net forces acting on the tunnels are written as the sum of the tractions acting on the invert of a single tunnel and the tractions that represent the motion induced by the neighbouring tunnel. By apportioning the tractions in this way, the vibration response of a two-tunnel system is written as a linear combination of displacement fields produced by a single-tunnel system. Using Fourier decomposition, forces are partitioned into symmetric and antisymmetric mode-number components to minimise computation times. The significance of the interactions between the two tunnels is quantified by calculating the insertion gains, in both the vertical and horizontal directions, that result from the existence of a second tunnel. The insertion-gain results are shown to be localised and highly dependent on frequency, tunnel orientation and tunnel thickness. At some locations, the magnitude of these insertion gains is greater than 20 dB, demonstrating that a high degree of inaccuracy exists in any surface vibration prediction model that includes only one of the two tunnels. This novel two-tunnel solution represents a significant contribution to the existing body of research into vibration from underground railways, as it shows that the second tunnel has a substantial influence on the accuracy of vibration predictions. © 2011 Elsevier Ltd. All rights reserved.
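The insertion gain reported here is the standard decibel ratio of the two-tunnel and single-tunnel responses at the same receiver point. A minimal sketch of that calculation (the displacement values are illustrative, not from the paper):

    import numpy as np

    def insertion_gain_db(u_two_tunnel, u_single_tunnel):
        # Insertion gain (dB) of the two-tunnel response relative to the
        # single-tunnel response at the same receiver point and frequency.
        return 20.0 * np.log10(np.abs(u_two_tunnel) / np.abs(u_single_tunnel))

    # Illustrative complex vertical displacements at one surface point:
    u_single = 2.0e-9 * np.exp(1j * 0.3)   # m, single-tunnel model
    u_twin = 2.4e-8 * np.exp(1j * 1.1)     # m, two-tunnel model
    print(insertion_gain_db(u_twin, u_single))  # about 21.6 dB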
Abstract:
State and regional policies, such as low carbon fuel standards (LCFSs), increasingly mandate that transportation fuels be examined according to their greenhouse gas (GHG) emissions. We investigate whether such policies benefit from determining fuel carbon intensities (FCIs) locally to account for variations in fuel production and to stimulate improvements in FCI. In this study, we examine the FCI of transportation fuels on a lifecycle basis within a specific state, Minnesota, and compare the results to FCIs using national averages. Using data compiled from 18 refineries over an 11-year period, we find that ethanol production is highly variable, resulting in a 42% difference between the resulting carbon intensities. Historical data suggest that lower FCIs are possible through incremental improvements in refining efficiency and the use of biomass for processing heat. Stochastic modeling of the corn ethanol FCI shows that gains in certainty due to knowledge of specific refinery inputs are overwhelmed by uncertainty in parameters external to the refiner, including impacts of fertilization and land use change. The lifecycle assessment (LCA) results are incorporated into multiple policy scenarios to demonstrate the effect of policy configurations on the use of alternative fuels. These results provide a contrast between volumetric mandates and LCFSs. © 2011 Elsevier Ltd.
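The stochastic-modeling point, that certainty about refinery-internal inputs is swamped by uncertainty in parameters external to the refiner, can be illustrated with a toy Monte Carlo over lifecycle components. All distributions and numbers below are placeholders for illustration, not figures from the study:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Illustrative component distributions, gCO2e per MJ of ethanol:
    refining = rng.normal(30.0, 2.0, n)     # refinery energy use: well characterised
    farming = rng.normal(25.0, 3.0, n)      # diesel, drying, farm inputs
    fertiliser = rng.lognormal(np.log(12.0), 0.5, n)  # fertiliser N2O: uncertain
    luc = rng.lognormal(np.log(15.0), 0.8, n)         # land-use change: very uncertain

    fci = refining + farming + fertiliser + luc
    lo, hi = np.percentile(fci, [5, 95])
    print(f"mean = {fci.mean():.1f} gCO2e/MJ, 5-95% = {lo:.1f}-{hi:.1f}")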
Abstract:
Optimisation of cooling systems within gas turbine engines is of great interest to engine manufacturers seeking gains in performance, efficiency and component life. The effectiveness of coolant delivery is governed by complex flows within the stator wells and the interaction of main annulus and cooling air in the vicinity of the rim seals. This paper reports the development of a test facility which allows the interaction of cooling air and main gas paths to be measured at conditions representative of those found in modern gas turbine engines. The test facility features a two stage turbine with an overall pressure ratio of approximately 2.6:1. Hot air is supplied to the main annulus using a Rolls-Royce Dart compressor driven by an aero-derivative engine plant. Cooling air can be delivered to the stator wells at multiple locations and at a range of flow rates which cover bulk ingestion through to bulk egress. The facility has been designed with adaptable geometry to enable rapid changes of cooling air path configuration. The coolant delivery system allows swift and accurate changes to the flow settings such that thermal transients may be performed. Particular attention has been focused on obtaining high accuracy data, using a radio telemetry system, as well as thorough calibration practices. Temperature measurements can now be made on both rotating and stationary discs with a long term uncertainty in the region of 0.3 K. A gas concentration measurement system has also been developed to obtain direct measurement of re-ingestion and rim seal exchange flows. High resolution displacement sensors have been installed in order to measure hot running geometry. This paper documents the commissioning of a test facility which is unique in terms of rapid configuration changes, non-dimensional engine matching and the instrumentation density and resolution. Example data for each of the measurement systems are presented. This includes the effect of coolant flow rate on the metal temperatures within the upstream cavity of the turbine stator well, the axial displacement of the rotor assembly during a commissioning test, and the effect of coolant flow rate on mixing in the downstream cavity of the stator well. Copyright © 2010 by ASME.
Abstract:
Atmospheric effects can significantly degrade the reliability of free-space optical communications. One such effect, scintillation, is caused by atmospheric turbulence and refers to random fluctuations in the irradiance and phase of the received laser beam. In this paper we investigate the use of multiple lasers and multiple apertures to mitigate scintillation. Since the scintillation process is slow, we adopt a block fading channel model and study the outage probability under the assumptions of orthogonal pulse-position modulation and non-ideal photodetection. Assuming perfect receiver channel state information (CSI), we derive the signal-to-noise ratio (SNR) exponents for the cases when the scintillation is lognormal, exponential and gamma-gamma distributed, which cover a wide range of atmospheric turbulence conditions. Furthermore, when CSI is also available at the transmitter, we show that very large gains in SNR are possible (in some cases larger than 15 dB) by adapting the transmitted power. Under a long-term power constraint, we outline fundamental design criteria via a simple expression that relates the number of lasers and apertures required, for a given code rate and number of codeword blocks, to completely remove system outages. Copyright © 2009 IEEE.
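A toy Monte Carlo estimate of the outage probability under lognormal scintillation shows the qualitative effect of adding lasers and apertures. The channel model below (equal-gain averaging over independent lognormal paths, with SNR scaling as the square of the collected intensity) is a simplifying assumption for illustration, not the paper's exact PPM/photodetection model:

    import numpy as np

    def outage_prob(snr_db, rate, m_lasers, n_apertures, sigma=0.3, trials=200_000):
        # Monte Carlo outage probability for a single-block fading channel
        # with i.i.d. lognormal irradiance on each laser-aperture path.
        rng = np.random.default_rng(1)
        mn = m_lasers * n_apertures
        # mean=-sigma^2/2 normalises the lognormal irradiance to E[I] = 1.
        i_paths = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=(trials, mn))
        h = i_paths.mean(axis=1)            # equal-gain average over all paths
        snr = 10 ** (snr_db / 10) * h**2    # intensity detection: power ~ I^2
        return np.mean(np.log2(1 + snr) < rate)

    for mn in [(1, 1), (2, 2), (4, 4)]:
        print(mn, outage_prob(10.0, 2.0, *mn))  # outage falls as paths are added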
Abstract:
An atrium is a central feature of many modern naturally ventilated building designs. The atrium fills with warm air from the adjoining storeys: this air may be further warmed by direct solar heating in the atrium, and the deep warm layer enhances the flow. In this paper we focus on the degree of flow enhancement achieved by an atrium which is itself 'ventilated' directly, by a low-level connection to the exterior. A theoretical model is developed to predict the steady stack-driven displacement flow and thermal stratification in the building, due to heat gains in the storey and solar gains in the atrium, and compared with the results of laboratory experiments. Direct ventilation of the atrium is detrimental to the ventilation of the storey and the best design is identified as a compromise that provides adequate ventilation of both spaces. We identify extremes of design for which an atrium provides no significant enhancement of the flow, and show that an atrium only enhances the flow in the storey if its upper opening is of an intermediate size, and its lower opening is sufficiently small. © 2003 Elsevier Science Ltd. All rights reserved.
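The stack-driven displacement flow underlying such models follows the textbook emptying-filling-box scaling Q = A* sqrt(g' h), where g' is the reduced gravity of the warm layer and A* an effective opening area. The sketch below uses that standard relation with an assumed discharge coefficient; it is not the paper's coupled storey-atrium model:

    import numpy as np

    def effective_area(a_top, a_bottom, cd=0.6):
        # Effective opening area for two openings in series (textbook form),
        # with an assumed discharge coefficient cd.
        return cd * a_top * a_bottom / np.sqrt(0.5 * (a_top**2 + a_bottom**2))

    def stack_flow(a_top, a_bottom, h, delta_t, t_ambient=293.0, g=9.81):
        # Steady buoyancy-driven volume flux through a stack of height h (m)
        # carrying a warm layer delta_t (K) above ambient.
        g_prime = g * delta_t / t_ambient   # reduced gravity of the warm layer
        return effective_area(a_top, a_bottom) * np.sqrt(g_prime * h)

    print(f"{stack_flow(1.0, 0.5, h=10.0, delta_t=5.0):.2f} m^3/s")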
Abstract:
We live in an era of abundant data. This has necessitated the development of new and innovative statistical algorithms to get the most from experimental data. For example, faster algorithms make practical the analysis of larger genomic data sets, allowing us to extend the utility of cutting-edge statistical methods. We present a randomised algorithm that accelerates the clustering of time series data using the Bayesian Hierarchical Clustering (BHC) statistical method. BHC is a general method for clustering any discretely sampled time series data. In this paper we focus on a particular application to microarray gene expression data. We define and analyse the randomised algorithm, before presenting results on both synthetic and real biological data sets. We show that the randomised algorithm leads to substantial gains in speed with minimal loss in clustering quality. The randomised time series BHC algorithm is available as part of the R package BHC, which is available for download from Bioconductor (version 2.10 and above) via http://bioconductor.org/packages/2.10/bioc/html/BHC.html. We have also made available a set of R scripts which can be used to reproduce the analyses carried out in this paper. These are available from https://sites.google.com/site/randomisedbhc/.
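The pattern behind such randomised accelerations, clustering a random subset exactly and attaching the remainder cheaply, can be sketched as follows. This uses scipy's average-linkage clustering as a stand-in for the Bayesian merge scores of BHC and illustrates the randomisation idea only, not the package's actual algorithm:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import cdist

    def randomised_cluster(series, n_sub=200, n_clusters=10, seed=0):
        # Cluster a random subset exactly, then attach every remaining
        # series to the nearest subset-cluster centroid.
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(series), size=min(n_sub, len(series)), replace=False)
        sub = series[idx]
        labels_sub = fcluster(linkage(sub, method="average"), n_clusters, "maxclust")
        centroids = np.stack([sub[labels_sub == k].mean(axis=0)
                              for k in np.unique(labels_sub)])
        return cdist(series, centroids).argmin(axis=1)  # labels for all series

    x = np.random.default_rng(1).normal(size=(1000, 12))  # 1000 series, 12 timepoints
    print(np.bincount(randomised_cluster(x)))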
Abstract:
We use polarization-resolved and temperature-dependent photoluminescence (PL) of single zincblende (ZB) (cubic) and wurtzite (WZ) (hexagonal) InP nanowires to probe differences in selection rules and bandgaps between these two semiconductor nanostructures. The WZ nanowires exhibit a bandgap 80 meV higher in energy than the ZB nanowires. The temperature dependence of the PL is similar but not identical for the WZ and ZB nanowires. We find that ZB nanowires exhibit strong polarization parallel to the nanowire axis, while the WZ nanowires exhibit polarized emission perpendicular to the nanowire axis. This behavior is interpreted in terms of the different selection rules for WZ and ZB crystal structures. © 2007 American Institute of Physics.
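The polarization anisotropy in such measurements is conventionally summarised by the degree of linear polarization, rho = (I_par - I_perp) / (I_par + I_perp). A minimal sketch with illustrative intensities (not values from the paper):

    def polarization_ratio(i_parallel, i_perpendicular):
        # Degree of linear polarization of nanowire emission: +1 means fully
        # polarized along the wire axis, -1 fully perpendicular to it.
        return (i_parallel - i_perpendicular) / (i_parallel + i_perpendicular)

    # Illustrative photon counts only:
    print(polarization_ratio(9.0, 1.0))   # ZB-like: parallel emission, rho = 0.8
    print(polarization_ratio(2.0, 8.0))   # WZ-like: perpendicular, rho = -0.6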
Abstract:
We propose a novel information-theoretic approach for Bayesian optimization called Predictive Entropy Search (PES). At each iteration, PES selects the next evaluation point that maximizes the expected information gained with respect to the global maximum. PES codifies this intractable acquisition function in terms of the expected reduction in the differential entropy of the predictive distribution. This reformulation allows PES to obtain approximations that are both more accurate and efficient than other alternatives such as Entropy Search (ES). Furthermore, PES can easily perform a fully Bayesian treatment of the model hyperparameters while ES cannot. We evaluate PES in both synthetic and real-world applications, including optimization problems in machine learning, finance, biotechnology, and robotics. We show that the increased accuracy of PES leads to significant gains in optimization performance.
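The acquisition PES targets is the mutual information between the observation y at a candidate x and the location x* of the global maximum, alpha(x) = H[p(y|D,x)] - E_{x*}[H[p(y|D,x,x*)]]. The sketch below is a crude Monte Carlo, moment-matched approximation of that quantity on a grid, for intuition only; it is not the expectation-propagation machinery the paper actually uses:

    import numpy as np

    def pes_acquisition(mu, cov, noise_var=1e-3, n_samples=2000, seed=0):
        # mu, cov: GP posterior mean and covariance on a grid of candidates.
        rng = np.random.default_rng(seed)
        f = rng.multivariate_normal(mu, cov, size=n_samples)  # posterior draws
        x_star = f.argmax(axis=1)                             # sampled maximisers
        y = f + rng.normal(0.0, np.sqrt(noise_var), size=f.shape)

        def gauss_entropy(var):
            return 0.5 * np.log(2 * np.pi * np.e * var)

        h_marginal = gauss_entropy(y.var(axis=0))             # H[y | D, x]
        h_cond = np.zeros_like(h_marginal)
        for s in np.unique(x_star):                           # E over p(x* | D)
            mask = x_star == s
            h_cond += mask.mean() * gauss_entropy(y[mask].var(axis=0) + 1e-12)
        return h_marginal - h_cond                            # info gain per x

    grid = np.linspace(0, 1, 50)
    cov = np.exp(-0.5 * (grid[:, None] - grid[None, :])**2 / 0.1**2) + 1e-9 * np.eye(50)
    print(pes_acquisition(np.zeros(50), cov).argmax())        # next point to evaluate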
Abstract:
This thesis presents the design, construction, control and evaluation of a novel force-controlled actuator. Traditional force-controlled actuators are designed from the premise that "stiffer is better". This approach gives a high-bandwidth system, prone to problems of contact instability, noise, and low power density. The actuator presented in this thesis is designed from the premise that "stiffness isn't everything". The actuator, which incorporates a series elastic element, trades off achievable bandwidth for gains in stable, low-noise force control and protection against shock loads. This thesis reviews related work in robot force control, presents theoretical descriptions of the control and expected performance of a series elastic actuator, and describes the design of a test actuator constructed to gather performance data. Finally, the performance of the system is evaluated by comparing the performance data to theoretical predictions.
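The core idea of a series elastic actuator is that force is sensed and controlled through the deflection of a deliberately compliant spring between the motor and the load, F = k_s (x_motor - x_load). A toy position-source force loop under that assumption, with all gains and values illustrative and the load held against a rigid surface:

    K_SPRING = 5_000.0        # N/m, deliberately compliant series element
    KP, KI = 0.0001, 0.01     # force-loop gains (m/N and m/(N*s))
    DT = 0.001                # s, control period

    def simulate(f_desired=20.0, steps=3000):
        x_motor, x_load, integ = 0.0, 0.0, 0.0   # load held rigid at x_load = 0
        for _ in range(steps):
            f_spring = K_SPRING * (x_motor - x_load)  # force = spring deflection
            err = f_desired - f_spring
            integ += err * DT
            x_motor += KP * err + KI * integ          # motor position command
        return f_spring

    print(f"steady-state force: {simulate():.2f} N")  # converges to 20 N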
Abstract:
Dissertation presented to Universidade Fernando Pessoa as part of the requirements for the degree of Master in Computer Engineering (Engenharia Informática), Information Systems and Multimedia branch.
Abstract:
Various concurrency control algorithms differ in the time when conflicts are detected, and in the way they are resolved. In that respect, the Pessimistic and Optimistic Concurrency Control (PCC and OCC) alternatives represent two extremes. PCC locking protocols detect conflicts as soon as they occur and resolve them using blocking. OCC protocols detect conflicts at transaction commit time and resolve them using rollbacks (restarts). For real-time databases, blockages and rollbacks are hazards that increase the likelihood of transactions missing their deadlines. We propose a Speculative Concurrency Control (SCC) technique that minimizes the impact of blockages and rollbacks. SCC relies on the use of added system resources to speculate on potential serialization orders and to ensure that if such serialization orders materialize, the hazards of blockages and rollbacks are minimized. We present a number of SCC-based algorithms that differ in the level of speculation they introduce, and the amount of system resources (mainly memory) they require. We show the performance gains (in terms of number of satisfied timing constraints) to be expected when a representative SCC algorithm (SCC-2S) is adopted.
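The speculation can be pictured as forking a shadow copy of a transaction at the point a conflict is detected, so that a ready-to-run alternative exists if the conflicting writer commits first. The toy sketch below shows only this bookkeeping (fork at first conflict, keep both copies); it is a loose illustration of the idea, not the SCC-2S protocol itself:

    from copy import deepcopy

    class Transaction:
        def __init__(self, name, steps):
            self.name, self.steps, self.local = name, steps, {}
            self.done = 0  # number of steps already executed

        def fork(self):
            # Speculative shadow: snapshot of execution state at the conflict
            # point, resumable if the optimistic copy must abort.
            return deepcopy(self)

    def run_with_speculation(txn, conflict_step):
        shadows = {"optimistic": txn}
        for i, step in enumerate(txn.steps):
            if i == conflict_step:
                # Conflict detected early, as in PCC, but instead of blocking
                # (PCC) or waiting for commit time (OCC), fork a shadow.
                shadows["speculative"] = txn.fork()
            step(txn.local)
            txn.done += 1
        return shadows  # commit the optimistic copy, or promote the shadow

    t = Transaction("T1", [lambda d: d.update(x=1), lambda d: d.update(y=2)])
    print({k: v.local for k, v in run_with_speculation(t, 1).items()})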
Abstract:
The past two decades have seen substantial gains in our understanding of the complex processes underlying disturbed brain-gut communication in disorders such as irritable bowel syndrome (IBS) and inflammatory bowel disease (IBD). Despite a growing understanding of the neurobiology of brain-gut axis dysfunction, there is a relative paucity of investigations into how the various factors involved in dysregulating the brain-gut axis, including stress, immune activation and pain, could impact on fundamental brain processes such as cognitive performance. To this end, we proposed a cognitive neurobiology of brain-gut axis dysfunction and took a novel approach to examine how disturbed brain-gut interactions may manifest as altered cognitive performance in IBS and IBD, both cross-sectionally and prospectively. We have demonstrated that disorders of the brain-gut axis are characterised by stable deficits in specific cognitive domains. Specifically, patients with IBS exhibit a consistent hippocampal-mediated visuospatial memory impairment. In addition, we have found evidence to suggest a similar visuospatial impairment in IBD. However, our most consistent finding within this population was that patients with Crohn’s disease exhibit impaired selective attention/response inhibition on the classic Stroop interference test. These cognitive deficits may serve to perpetuate and sustain brain-gut axis dysfunction. Furthermore, this research has shed light on some of the underlying neurobiological mechanisms that may be mediating cognitive dysfunction in IBS. Our findings may have significant implications for the individual who suffers from a brain-gut axis disorder and may also inform future treatment strategies. Taken together, these findings can be incorporated into existing neurobiological models of brain-gut axis dysfunction, to develop a more comprehensive model accounting for the cognitive neurobiology of brain-gut axis disorders. This has furthered our understanding of disease pathophysiology and may ultimately aid in both the diagnosis and treatment of these highly prevalent, but poorly understood disorders.
Abstract:
This longitudinal study tracked third-level French (n=10) and Chinese (n=7) learners of English as a second language (L2) during an eight-month study abroad (SA) period at an Irish university. The investigation sought to determine whether there was a significant relationship between length of stay (LoS) abroad and gains in the learners' oral complexity, accuracy and fluency (CAF), what the relationship was between these three language constructs, and whether the two learner groups would experience similar paths to development. Additionally, the study investigated whether specific reported out-of-class contact with the L2 was implicated in oral CAF gains. Oral data were collected at three equidistant time points: at the beginning of SA (T1), midway through the SA sojourn (T2) and at the end (T3), allowing for a comparison of CAF gains arising during one semester abroad to those arising during a subsequent semester. Data were collected using Sociolinguistic Interviews (Labov, 1984) and adapted versions of the Language Contact Profile (Freed et al., 2004). Overall, the results point to LoS abroad as a highly influential variable in gains to be expected in oral CAF during SA. While one semester in the target-language (TL) country was not enough to foster statistically significant improvement in any of the CAF measures employed, significant improvement was found during the second semester of SA. Significant differences were also revealed between the two learner groups. Finally, significant correlations, some positive, some negative, were found between gains in CAF and specific usage of the L2. All in all, the disaggregation of the group data clearly illustrates, in line with other recent enquiries (e.g. Wright and Cong, 2014), that each individual learner's path to CAF development was unique and highly individualised, thus providing strong evidence for the recent claim that SLA is "an individualized nonlinear endeavor" (Polat and Kim, 2014: 186).
Abstract:
This paper briefly describes an interactive parallelisation toolkit that can be used to generate parallel code suitable for either a distributed memory system (using message passing) or a shared memory system (using OpenMP). This study focuses on how the toolkit is used to parallelise a complex heterogeneous ocean modelling code within a few hours for use on a shared memory parallel system. The generated parallel code is essentially the serial code with OpenMP directives added to express the parallelism. The results show that substantial gains in performance can be achieved over the single thread version with very little effort.