946 results for "Available transfer capacity"
Abstract:
DUE TO COPYRIGHT RESTRICTIONS, ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY WITH PRIOR ARRANGEMENT
Abstract:
The integrability of the nonlinear Schrödinger equation (NLSE) by the inverse scattering transform, shown in a seminal work [1], gave an interesting opportunity to treat the corresponding nonlinear channel similarly to a linear one by using the nonlinear Fourier transform. The integrability of the NLSE underlies the old idea of eigenvalue communications [2], which was resurrected in recent works [3-7]. In [6, 7] a new method for coherent optical transmission employing the continuous nonlinear spectral data, nonlinear inverse synthesis, was introduced. It assumes the modulation and detection of data directly on the continuous part of the nonlinear spectrum associated with an integrable transmission channel (the NLSE in the case considered). Although such a transmission method is inherently free from nonlinear impairments, noisy signal corruptions, arising due to amplifier spontaneous emission, inevitably degrade the optical system performance. We study the properties of the noise-corrupted channel model in the nonlinear spectral domain attributed to the NLSE. We derive the general stochastic equations governing the signal evolution inside the nonlinear spectral domain and elucidate the properties of the emerging nonlinear spectral noise using well-established methods of perturbation theory based on the inverse scattering transform [8]. It is shown that in the presence of small noise the communication channel in the nonlinear domain is an additive Gaussian channel with memory and a signal-dependent correlation matrix. We demonstrate that the effective spectral noise acquires "colouring": its autocorrelation function becomes slowly decaying and non-diagonal as a function of "frequencies", and the noise loses its circular symmetry, becoming elliptically polarized. We then derive a lower bound for the spectral efficiency of such a channel.
Our main result is that by using nonlinear spectral techniques one can significantly increase the achievable spectral efficiency compared to the currently available methods [9]. REFERENCES 1. Zakharov, V. E. and A. B. Shabat, Sov. Phys. JETP, Vol. 34, 62-69, 1972. 2. Hasegawa, A. and T. Nyu, J. Lightwave Technol., Vol. 11, 395-399, 1993. 3. Yousefi, M. I. and F. R. Kschischang, IEEE Trans. Inf. Theory, Vol. 60, 4312-4328, 2014. 4. Yousefi, M. I. and F. R. Kschischang, IEEE Trans. Inf. Theory, Vol. 60, 4329-4345, 2014. 5. Yousefi, M. I. and F. R. Kschischang, IEEE Trans. Inf. Theory, Vol. 60, 4346-4369, 2014. 6. Prilepsky, J. E., S. A. Derevyanko, K. J. Blow, I. Gabitov, and S. K. Turitsyn, Phys. Rev. Lett., Vol. 113, 013901, 2014. 7. Le, S. T., J. E. Prilepsky, and S. K. Turitsyn, Opt. Express, Vol. 22, 26720-26741, 2014. 8. Kaup, D. J. and A. C. Newell, Proc. R. Soc. Lond. A, Vol. 361, 413-446, 1978. 9. Essiambre, R.-J., G. Kramer, P. J. Winzer, G. J. Foschini, and B. Goebel, J. Lightwave Technol., Vol. 28, 662-701, 2010.
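In standard normalized form, the channel and its nonlinear-spectral counterpart described above can be sketched as follows (a sketch only; normalization conventions for the NLSE and the reflection coefficient vary across Refs. [1-8]):

```latex
% Noise-perturbed focusing NLSE; n(z,t) models amplifier spontaneous emission
i q_z + \tfrac{1}{2} q_{tt} + |q|^2 q = n(z, t)

% The noise-free evolution of the continuous nonlinear spectrum (reflection
% coefficient r) is linear in z; noise adds an effective spectral term N:
r(\xi, L) = r(\xi, 0)\, e^{2 i \xi^2 L} + N(\xi, L)
```

Per the abstract, for small noise N(ξ, L) is Gaussian but coloured: slowly decaying and non-diagonal in ξ, and elliptically polarized.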
Abstract:
Presently, monoethanolamine (MEA) remains the industrial standard solvent for CO2 capture processes. Operating issues relating to corrosion and degradation of MEA at high temperatures and concentrations, and in the presence of oxygen, in a traditional PCC process have introduced the need for higher-quality and costly stainless steels in the construction of capture equipment and for the use of oxygen scavengers and corrosion inhibitors. While capture processes employing MEA have improved significantly in recent times, there is a continued attraction towards alternative solvent systems which offer further improvements. This movement includes aqueous amine blends, which are gaining momentum as new-generation solvents for CO2 capture processes. Given the vast array of amines available to date, endless opportunities exist to tune and tailor a solvent to deliver specific performance and physical properties in line with a desired capture process. The current work is focussed on the rationalisation of CO2 absorption behaviour in a series of aqueous amine blends incorporating monoethanolamine, N,N-dimethylethanolamine (DMEA), N,N-diethylethanolamine (DEEA) and 2-amino-2-methyl-1-propanol (AMP) as solvent components. Mass transfer/kinetic measurements were performed using a wetted wall column (WWC) contactor at 40°C for a series of blends in which the blend properties, including amine concentration, blend ratio, and CO2 loadings from 0.0-0.4 (moles CO2/total moles amine), were systematically varied and assessed.
Equilibrium CO2 solubility in each of the blends was estimated using a software tool developed in Matlab for the prediction of vapour-liquid equilibrium, using a combination of the known chemical equilibrium reactions and constants for the individual amine components combined into a blend. From the CO2 mass transfer data, the largest absorption rates were observed in blends containing 3M MEA/3M Am2, while the selection of the Am2 component had only a marginal impact on mass transfer rates. Overall, CO2 mass transfer in the fastest blends containing 3M MEA/3M Am2 was found to be only slightly lower than in a 5M MEA solution at similar temperatures and CO2 loadings. In terms of equilibrium behaviour, a slight decrease in the absorption capacity (moles CO2/mole amine) with increasing Am2 concentration in the blends with MEA was observed, while cyclic capacity followed the opposite trend. Significant increases in cyclic capacity (26-111%) were observed in all blends when compared to MEA solutions at similar temperatures and total amine concentrations. In view of the good compromise between CO2 absorption rate and capacity, a blend containing 3M MEA and 3M AMP would represent a reasonable alternative to 5M MEA as a standalone solvent.
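The cyclic-capacity comparison above can be illustrated numerically; the loading values below are made-up placeholders, not the study's measurements:

```python
# Cyclic capacity = rich loading minus lean loading (moles CO2 / mole amine).
# Comparison of a 5M MEA baseline vs a hypothetical 3M MEA / 3M AMP blend;
# all loading numbers here are illustrative placeholders.

def cyclic_capacity(rich_loading, lean_loading):
    """Cyclic capacity in moles CO2 per mole amine."""
    return rich_loading - lean_loading

def percent_increase(blend, baseline):
    return 100.0 * (blend - baseline) / baseline

mea_cyclic = cyclic_capacity(0.50, 0.25)    # placeholder loadings for 5M MEA
blend_cyclic = cyclic_capacity(0.55, 0.20)  # placeholder loadings for the blend

print(round(percent_increase(blend_cyclic, mea_cyclic)))  # 40
```

A larger gap between rich and lean loadings is what drives the 26-111% cyclic-capacity gains reported for the blends.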
Abstract:
Digital systems can generate left and right audio channels that create the effect of virtual sound source placement (spatialization) by processing an audio signal through pairs of Head-Related Transfer Functions (HRTFs) or, equivalently, Head-Related Impulse Responses (HRIRs). The spatialization effect is better when individually-measured HRTFs or HRIRs are used than when generic ones (e.g., from a mannequin) are used. However, the measurement process is not available to the majority of users. There is ongoing interest in finding mechanisms to customize HRTFs or HRIRs to a specific user, in order to achieve an improved spatialization effect for that subject. Unfortunately, the current models used for HRTFs and HRIRs contain over a hundred parameters, none of which can be easily related to the characteristics of the subject. This dissertation proposes an alternative model for the representation of HRTFs, which contains at most 30 parameters, all of which have a defined functional significance. It also presents methods to obtain the values of the parameters in the model that make it approximately equivalent to an individually-measured HRTF. This conversion is achieved by the systematic deconstruction of HRIR sequences through an augmented version of the Hankel Total Least Squares (HTLS) decomposition approach. An average 95% match (fit) was observed between the original HRIRs and those reconstructed from the Damped and Delayed Sinusoids (DDSs) found by the decomposition process, for ipsilateral source locations. The dissertation also introduces and evaluates an HRIR customization procedure, based on a multilinear model implemented through a 3-mode tensor, for mapping of anatomical data from the subjects to the HRIR sequences at different sound source locations.
This model uses the Higher-Order Singular Value Decomposition (HOSVD) method to represent the HRIRs and is capable of generating customized HRIRs from easily attainable anatomical measurements of a new intended user of the system. Listening tests were performed to compare the spatialization performance of customized, generic and individually-measured HRIRs when they are used for synthesized spatial audio. Statistical analysis of the results confirms that the type of HRIRs used for spatialization is a significant factor in the spatialization success, with the customized HRIRs yielding better results than generic HRIRs.
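The damped-and-delayed-sinusoid representation described above can be sketched as follows. The component parameters here are arbitrary examples, and the dissertation's HTLS decomposition is what recovers them from measured HRIRs; this sketch only shows the signal model and the fit metric:

```python
import math

def dds(n, amp, decay, freq, phase, delay):
    """One damped, delayed sinusoid sample; zero before its onset delay."""
    if n < delay:
        return 0.0
    t = n - delay
    return amp * math.exp(-decay * t) * math.cos(2 * math.pi * freq * t + phase)

def reconstruct(length, components):
    """Sum a set of DDS components into an HRIR-like sequence."""
    return [sum(dds(n, *c) for c in components) for n in range(length)]

def fit_percent(original, model):
    """Normalized fit: 100% when the model matches the sequence exactly."""
    err = sum((a - b) ** 2 for a, b in zip(original, model))
    ref = sum(a * a for a in original)
    return 100.0 * (1.0 - math.sqrt(err / ref))

# Two made-up DDS components standing in for an HTLS decomposition result:
# (amplitude, decay, normalized frequency, phase, onset delay in samples).
components = [(1.0, 0.05, 0.12, 0.0, 3), (0.4, 0.08, 0.27, 1.0, 5)]
hrir = reconstruct(64, components)
print(fit_percent(hrir, reconstruct(64, components)))  # 100.0
```

On measured data the recovered components are approximate, which is why the reported average fit is about 95% rather than 100%.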
Abstract:
Since the 1990s, scholars have paid special attention to public management's role in theory and research under the assumption that effective management is one of the primary means for achieving superior performance. To some extent, this was influenced by popular business writings of the 1980s as well as the reinventing literature of the 1990s. A number of case studies but limited quantitative research papers have been published showing that management matters in the performance of public organizations. My study examined whether or not management capacity increased organizational performance, using quantitative techniques. The specific research problem analyzed was whether significant differences existed between high and average performing public housing agencies on select criteria identified in the Government Performance Project (GPP) management capacity model, and whether this model could predict outcome performance measures in a statistically significant manner, while controlling for exogenous influences. My model included two of four GPP management subsystems (human resources and information technology), integration and alignment of subsystems, and an overall managing-for-results framework. It also included environmental and client control variables that were hypothesized to affect performance independent of management action. Descriptive results of survey responses showed high performing agencies with better scores on most high performance dimensions of individual criteria, suggesting support for the model; however, quantitative analysis found limited statistically significant differences between high and average performers and limited predictive power of the model. My analysis led to the following major conclusions: past performance was the strongest predictor of present performance; high unionization hurt performance; and budget-related criteria mattered more for high performance than other model factors.
As to the specific research question, management capacity may be necessary, but it is not sufficient, to increase performance. The research suggested managers may benefit by implementing best practices identified through the GPP model. The usefulness of the model could be improved by adding direct service delivery to the model, which may also improve its predictive power. Finally, abundant tested concepts and tools designed to improve system performance are available to practitioners for improving management subsystem support of direct service delivery.
Abstract:
A high resolution study of the H(e,e'K+)Λ,Σ0 reaction was performed at Hall A, TJNAF as part of the hypernuclear experiment E94-107. One important ingredient to the measurement of the hypernuclear cross section is the elementary cross section for production of hyperons, Λ and Σ0. This reaction was studied using a hydrogen (i.e. a proton) target. Data were taken at very low Q2 (∼0.07 (GeV/c)2) and W∼2.2 GeV. Kaons were detected along the direction of q, the momentum transferred by the incident electron (θCM ∼ 6°). In addition, there are few data available regarding electroproduction of hyperons at low Q2 and θCM, and the available theoretical models differ significantly in this kinematical region of W. The measurement of the elementary cross section was performed by scaling the Monte Carlo cross section (MCEEP) with the experimental-to-simulated yield ratio. The Monte Carlo cross section includes an experimental fit and extrapolation from the existing data for electroproduction of hyperons. Moreover, the estimated transverse component of the electroproduction cross section of H(e,e'K+)Λ was compared to the different predictions of the theoretical models and existing data curves for photoproduction of hyperons. None of the models fully describe the cross-section results over the entire angular range. Furthermore, measurements of the Σ0/Λ production ratio were performed at θCM ∼ 6°, where data are not available. Finally, data for the measurements of the differential cross sections and the Σ0/Λ production were binned in Q2, W and θCM to understand the dependence on these variables. These results are not only a fundamental contribution to the hypernuclear spectroscopy studies but also an important experimental measurement to constrain existing theoretical models for the elementary reaction.
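The yield-ratio scaling step described above amounts to the relation below; the symbols (Y for yields) are notational choices for this sketch, not taken from the paper:

```latex
\sigma_{\mathrm{exp}} \;=\; \sigma_{\mathrm{MC}} \times \frac{Y_{\mathrm{exp}}}{Y_{\mathrm{sim}}}
```

with σ_MC the MCEEP input cross section built from the fit and extrapolation of existing electroproduction data.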
Abstract:
This research focuses on developing a capacity planning methodology for the emerging concurrent engineer-to-order (ETO) operations. The primary focus is placed on the capacity planning at sales stage. This study examines the characteristics of capacity planning in a concurrent ETO operation environment, models the problem analytically, and proposes a practical capacity planning methodology for concurrent ETO operations in the industry. A computer program that mimics a concurrent ETO operation environment was written to validate the proposed methodology and test a set of rules that affect the performance of a concurrent ETO operation. This study takes a systems engineering approach to the problem and employs systems engineering concepts and tools for the modeling and analysis of the problem, as well as for developing a practical solution to this problem. This study depicts a concurrent ETO environment in which capacity is planned. The capacity planning problem is modeled into a mixed integer program and then solved for smaller-sized applications to evaluate its validity and solution complexity. The objective is to select the best set of available jobs to maximize the profit, while having sufficient capacity to meet each due date expectation. The nature of capacity planning for concurrent ETO operations is different from other operation modes. The search for an effective solution to this problem has been an emerging research field. This study characterizes the problem of capacity planning and proposes a solution approach to the problem. This mathematical model relates work requirements to capacity over the planning horizon. The methodology is proposed for solving industry-scale problems. Along with the capacity planning methodology, a set of heuristic rules was evaluated for improving concurrent ETO planning.
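A generic form of the job-selection model described above can be written as a mixed integer program; the notation is illustrative, and the study's actual formulation may carry additional constraints:

```latex
\max_{x} \; \sum_{j} p_j x_j
\qquad \text{s.t.} \qquad
\sum_{j} a_{jt}\, x_j \le C_t \quad \forall t,
\qquad x_j \in \{0,1\} \quad \forall j
```

Here x_j selects job j, p_j is its profit, a_{jt} its capacity requirement in period t, and C_t the capacity available in that period; due-date expectations are respected by confining each job's a_{jt} to periods no later than its due date.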
Abstract:
Implicit in the current design practice for minimum uplift capacity is the assumption that a connection's capacity is proportional to the number of fasteners per connection joint. This assumption may overestimate the capacity of joints by a factor of two or more and may be the cause of connection failures in extreme wind events. The current research serves to modify current practice by proposing a realistic relationship between the number of fasteners and the capacity of the joint. The research is also aimed at further development of a non-intrusive continuous load path (CLP) connection system using Glass Fiber Reinforced Polymer (GFRP) and epoxy. Suitable designs were developed for stud-to-top-plate and gable end connections, and tests were performed to evaluate the ultimate load, creep and fatigue behavior. The objective was to determine the performance of the connections under simulated sustained hurricane conditions. The performance of the new connections was satisfactory.
Abstract:
While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks and gene regulatory networks, there have been few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications are restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications remains unknown.
In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples such as 1) fluorescent taggants and 2) stochastic computing.
By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and the Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.
Meanwhile, RET-based sampling units (RSU) can be constructed to accelerate probabilistic algorithms for wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware that traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor/GPU as specialized functional units or organized as a discrete accelerator to bring substantial speedups and power savings.
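The CTMC-to-sample mapping described above can be sketched in software: simulating a continuous-time Markov chain until absorption yields a sample from a phase-type distribution, which is the role the exciton dynamics play physically. The chain and its rates below are toy values, not an actual RET network:

```python
import random

def sample_absorption_time(rates, start=0, rng=random):
    """Simulate a CTMC until it reaches an absorbing state; return elapsed time.

    rates[i] is a dict {j: rate} of transitions out of transient state i.
    Any state with no outgoing transitions is absorbing. The returned time
    follows a phase-type distribution, analogous to the photon emission
    times of a RET network. All rates here are illustrative.
    """
    state, t = start, 0.0
    while rates.get(state):
        out = rates[state]
        total = sum(out.values())
        t += rng.expovariate(total)      # exponential holding time in state
        r = rng.uniform(0.0, total)      # pick next state proportional to rate
        acc = 0.0
        for nxt, rate in out.items():
            acc += rate
            if r <= acc:
                state = nxt
                break
    return t

# Toy 2-step chain 0 -> 1 -> 2 (absorbing), standing in for an exciton path.
rates = {0: {1: 2.0}, 1: {2: 1.0}}
rng = random.Random(7)
samples = [sample_absorption_time(rates, rng=rng) for _ in range(20000)]
print(sum(samples) / len(samples))  # ≈ 1/2.0 + 1/1.0 = 1.5
```

Reshaping the chain (in a RET network, the chromophore geometry) reshapes the sampling distribution, which is what makes the entropy source programmable.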
Abstract:
Results of experimental studies of ion exchange properties of manganese and iron minerals in micronodules from diverse bioproductive zones of the World Ocean were considered. It was found that the sorption behavior of these minerals was similar to that of ore minerals from ferromanganese nodules and low-temperature hydrothermal crusts. The exchange complex of minerals in the micronodules includes the major (Na⁺, K⁺, Ca²⁺, Mg²⁺, and Mn²⁺) and subordinate (Ni²⁺, Cu²⁺, Co²⁺, Pb²⁺, and others) cations. Reactivity of these cations increases from Pb²⁺ and Co²⁺ to Na⁺ and Ca²⁺. Exchange capacity of micronodule minerals increases from alkali to heavy metal cations. Capacity of iron and manganese minerals in oceanic micronodules increases in the following series: goethite < goethite + birnessite < todorokite + asbolane-buserite + birnessite < asbolane-buserite + birnessite < birnessite + asbolane-buserite < birnessite + vernadite ≈ Fe-vernadite + Mn-feroxyhyte. The data obtained supplement available information on ion exchange properties of oceanic ferromanganese sediments and refine the role of sorption processes in redistribution of metal cations at the bottom water - sediment interface during micronodule formation and growth.
Abstract:
In Canada, increases in rural development have led to a growing need to effectively manage the resulting municipal and city sewage without the addition of significant cost- and energy-expending infrastructure. Storring Septic Service Limited is a family-owned, licensed wastewater treatment facility located in eastern Ontario. It makes use of a passive waste stabilization pond system to treat and dispose of waste and wastewater in an environmentally responsible manner. Storring Septic, like many other similar small-scale wastewater treatment facilities across Canada, has the potential to act as a sustainable eco-engineered facility that municipalities and service providers could utilize to manage and dispose of their wastewater. However, it is of concern that the substantial inclusion of third party material could be detrimental to the stability and robustness of the pond system. In order to augment the capacity of the current facility, and ensure it remains a self-sustaining system with the capacity to safely accept septage from other sewage haulers, it was hypothesized that pond effluent treatment could be further enhanced through the incorporation of one of three different technology solutions, which would allow the reduction of wastewater quality parameters below existing regulatory effluent discharge limits put in place by Ontario's Ministry of the Environment and Climate Change (MOECC). Two of these solutions make use of biofilm technologies in order to enhance the removal of wastewater parameters of interest, and the third utilizes the natural water filtration capabilities of zebra mussels. Pilot-scale testing investigated the effects of each of these technologies on treatment performance under both cold and warm weather operation. This research aimed to understand the important mechanisms behind biological filtration methods in order to choose and optimize the best treatment strategy for full-scale testing and implementation.
In doing so, a recommendation matrix was developed with the potential to be used as a universal operational strategy for wastewater treatment facilities located in environments of similar climate and ecology.
Abstract:
Background: Individuals with chronic obstructive pulmonary disease (COPD) have higher than normal ventilatory equivalents for carbon dioxide (VE/VCO2) during exercise. There is growing evidence that emphysema on thoracic computed tomography (CT) scans is associated with poor exercise capacity in COPD patients with only mild-to-moderate airflow obstruction. We hypothesized that emphysema is an underlying cause of microvascular dysfunction and ventilatory inefficiency, which in turn contributes to reduced exercise capacity. We expected ventilatory inefficiency to be associated with a) the extent of emphysema; b) lower diffusing capacity for carbon monoxide; c) a reduced pulmonary blood flow response to exercise; and d) reduced exercise capacity. Methods: In a cross-sectional study, 19 subjects with mild-to-moderate COPD (mean ± SD FEV1 = 82 ± 13% predicted, 12 GOLD grade 1) and 26 age-, sex-, and activity-matched controls underwent a ramp-incremental symptom-limited exercise test on a cycle ergometer. Ventilatory inefficiency was assessed by the minimum VE/VCO2 value (nadir). A subset of subjects also completed repeated constant work rate exercise bouts with non-invasive measurements of pulmonary blood flow. Emphysema was quantified as the percentage of attenuation areas below -950 Hounsfield units on CT scans. An electronic scoresheet was used to keep track of emphysema sub-types. Results: COPD subjects typically had centrilobular emphysema (76.8 ± 10.1% of total emphysema) in the upper lobes (upper/lower lobe ratio = 0.82 ± 0.04). They had lower peak oxygen uptake (VO2), higher VE/VCO2 nadir and greater dyspnea scores than controls (p<0.05). Lower peak VO2 and worse dyspnea were found in COPD subjects with VE/VCO2 nadirs ≥ 30. COPD subjects had blunted increases in pulmonary blood flow from rest to iso-VO2 exercise (p<0.05).
Higher VE/VCO2 nadir in COPD subjects correlated with emphysema severity (r= 0.63), which in turn correlated with reduced lung diffusing capacity (r= -0.72) and blunted changes in pulmonary blood flow from rest to exercise (r= -0.69) (p<0.01). Conclusions: Ventilation “wasted” in emphysematous areas is associated with reduced exercise ventilatory efficiency in mild-to-moderate COPD. Exercise ventilatory inefficiency links structure (emphysema) and function (gas transfer) to a key clinical outcome (reduced exercise capacity) in COPD patients with modest spirometric abnormalities.
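The ventilatory-inefficiency index used above, the VE/VCO2 nadir, is simply the minimum of the ventilation-to-CO2-output ratio over the incremental test. A minimal sketch with made-up exercise data, not values from the study:

```python
def ve_vco2_nadir(ve, vco2):
    """Minimum ventilatory equivalent for CO2 over an incremental test.

    ve:   minute ventilation samples (L/min)
    vco2: CO2 output samples (L/min), same length as ve
    """
    return min(v / c for v, c in zip(ve, vco2))

# Made-up exercise test samples (rest through peak), not from the study.
ve =   [20.0, 30.0, 45.0, 60.0, 80.0]
vco2 = [0.60, 1.00, 1.55, 2.10, 2.50]
nadir = ve_vco2_nadir(ve, vco2)
print(round(nadir, 1))  # 28.6
# A nadir >= 30 would flag ventilatory inefficiency per the threshold above.
```

The ratio is high at rest, falls to its nadir in mid-exercise, and rises again near peak, which is why the minimum rather than any single work rate is used.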
Abstract:
The heat transfer from a hot primary flow stream passing over the outside of an airfoil-shaped strut to a cool secondary flow stream passing through the inside of that strut was studied experimentally and numerically. The results showed that the heat transfer on the inside of the strut could be reliably modeled as a developing flow and described using a power law model. The heat transfer on the outside of the strut was complicated by flow separation and stall on the suction side of the strut at high angles of attack. This separation was quite sensitive to the condition of the turbulence in the flow passing over the strut, with the size of the separated wake changing significantly as the mean magnitude and levels of anisotropy were varied. The point of first stall moved by as much as 15% of the chord, while average heat transfer levels changed by 2-5% as the inlet condition was varied. This dependence on inlet conditions meant that comparisons between experiment and steady RANS-based CFD were quite poor. Differences between the CFD and experiment were attributed to anisotropic and unsteady effects. The coupling between the two flows was shown to be quite low; that is, heat transfer coefficients on both the inner and outer surfaces of the strut were relatively unaffected by the temperature of the strut, and it was possible to predict the temperature on the strut surface quite reliably using heat transfer data from decoupled tests, especially for CFD simulations.
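A power-law description of the internal, developing-flow heat transfer of the kind the study reports can be sketched as below. The functional form and all constants (C, m, n, p) are illustrative placeholders to be fitted to data, not the study's values:

```python
def nusselt_power_law(re, pr, x_over_d, C=0.02, m=0.8, n=0.4, p=-0.1):
    """Generic power-law correlation Nu = C * Re^m * Pr^n * (x/D)^p.

    The (x/D)^p factor with p < 0 captures the elevated heat transfer of a
    thermally developing flow near the duct inlet. Constants are placeholders.
    """
    return C * re**m * pr**n * x_over_d**p

nu_inlet = nusselt_power_law(2.0e4, 0.7, 2.0)        # near the inlet
nu_downstream = nusselt_power_law(2.0e4, 0.7, 20.0)  # further downstream
print(nu_inlet > nu_downstream)  # True: Nu decays as the flow develops
```

Fitting C, m, n, p to the measured internal-side data is what turns this generic form into the reliable model the abstract describes.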
Abstract:
This paper develops a simple model of the post-secondary education system in Canada that provides a useful basis for thinking about issues of capacity and access. It uses a supply-demand framework, where demand comes on the part of individuals wanting places in the system, and supply is determined not only by various directives and agreements between educational ministries and institutions (and other factors), but also by the money available to universities and colleges through tuition fees. The supply and demand curves are then put together with a stylised tuition-setting rule to describe the "market" for post-secondary schooling. This market determines the number of students in the system, and their characteristics, especially as they relate to "ability" and family background, the latter being especially relevant to access issues. Various changes in the system (including tuition fees, student financial aid, government support for institutions, and the returns to schooling) are then discussed in terms of how they affect the number of students and their characteristics, that is, capacity and access.
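The supply-demand framework described above can be sketched numerically. The linear functional forms, all parameter values, and the market-clearing rule below are invented for illustration; the paper's stylised tuition-setting rule need not clear the market:

```python
def demand(tuition, a=100.0, b=0.05):
    """Stylized demand for places, decreasing in tuition (thousands of students).

    Parameters a and b are made-up placeholders."""
    return max(a - b * tuition, 0.0)

def supply(tuition, base=40.0, c=0.04):
    """Stylized supply of places: a public-funding base plus tuition revenue."""
    return base + c * tuition

def excess_demand(tuition):
    return demand(tuition) - supply(tuition)

# A simple market-clearing rule standing in for the paper's stylised rule:
# bisect for the tuition level at which demand equals supply.
lo, hi = 0.0, 2000.0
for _ in range(60):
    mid = (lo + hi) / 2
    if excess_demand(mid) > 0:
        lo = mid
    else:
        hi = mid
print(round((lo + hi) / 2, 1))  # ≈ 666.7 given these made-up parameters
```

Shifting the supply curve (more government support) or the demand curve (higher returns to schooling) moves both the equilibrium tuition and enrolment, which is the comparative-statics exercise the paper carries out.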