121 results for RESIDENCE TIME DISTRIBUTION
Abstract:
It is accepted that the efficiency of sugar cane clarification is closely linked with sugar juice composition (including suspended or insoluble impurities), the inorganic phosphate content, the liming condition and type, and the interactions between the juice components. These interactions are not well understood, particularly those between calcium, phosphate, and sucrose in sugar cane juice. Studies have been conducted on calcium oxide (CaO)/phosphate/sucrose systems in both synthetic and factory juices to provide further information on the defecation process (i.e., simple liming to effect impurity removal) and to identify an effective clarification process that would reduce scaling of sugar factory evaporators, pans, and centrifugals. Results have shown that a two-stage process, in which lime saccharate is added to a set juice pH followed by sodium hydroxide to a final juice pH (or a similar two-stage process in which the order of alkali addition is reversed) prior to clarification, reduces the impurity loading of the clarified juice compared with that obtained by conventional defecation. The treatment process reduced CaO (27% to 50%) and MgO (up to 20%) in clarified juices, with no apparent loss in juice clarity or increase in the residence time of the mud particles relative to the conventional process. There was also a reduction in the SiO2 content. However, the disadvantage of this process is a significant increase in the Na2O content.
Abstract:
In this study, solid tire wastes available in Bangladesh were characterized through proximate and ultimate analyses, gross calorific values and thermogravimetric analysis to investigate their suitability as feedstock for thermal recycling by pyrolysis technology. A new heating approach, a fixed-bed fire-tube heating pyrolysis reactor, was designed and fabricated for the recovery of liquid hydrocarbons from solid tire wastes. The tire wastes were pyrolysed in the internally heated fixed-bed fire-tube reactor, and a maximum liquid yield of 46-55 wt% of solid tire waste was obtained at a temperature of 475 °C, a feed size of 4 cm3 and a residence time of 5 s under an N2 atmosphere. The liquid products were characterized by physical properties, elemental analysis, FT-IR, 1H-NMR, GC-MS and distillation. The results show that the liquid products are comparable to petroleum fuels, although fractional distillation and desulphurization are essential before they can be used as alternative diesel engine fuels.
Abstract:
Crashes that occur on motorways contribute a significant proportion (40-50%) of non-recurrent motorway congestion. Reducing the frequency of crashes therefore helps address congestion issues (Meyer, 2008). Crash likelihood estimation studies commonly focus on traffic conditions in a short time window around the time of a crash, while longer-term pre-crash traffic flow trends are neglected. In this paper we show, through data mining techniques, that a relationship exists between pre-crash traffic flow patterns and crash occurrence on motorways. We compare these patterns with normal traffic trends and show that this knowledge has the potential to improve the accuracy of existing models and opens the path for new development approaches. The data for the analysis were extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that were matched with the corresponding traffic flow data using an incident detection algorithm. Traffic trends (traffic speed time series) revealed that crashes can be clustered with regard to the dominant traffic patterns prior to the crash. The crashes were clustered using the K-Means method with a Euclidean distance function. Normal-situation data were then extracted based on the time distribution of crashes and clustered for comparison with the "high risk" clusters. Five major trends were found in the clustering results for both high-risk and normal conditions, and the identified traffic regimes differed in their speed trends. Based on these findings, crash likelihood estimation models can be fine-tuned to the monitored traffic conditions with a sliding window of 30 minutes to increase the accuracy of the results and minimize false alarms.
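The clustering step described above, K-Means with a Euclidean distance function applied to fixed-length speed time series, can be sketched as follows. This is a minimal pure-Python illustration on toy speed profiles; the window length, units and random seed are assumptions for demonstration, not the study's settings.

```python
import random

def kmeans(series, k, iters=50, seed=0):
    """Plain K-Means with Euclidean distance on fixed-length time series."""
    rng = random.Random(seed)
    centroids = rng.sample(series, k)
    for _ in range(iters):
        # Assign each series to its nearest centroid (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for s in series:
            d = [sum((a - b) ** 2 for a, b in zip(s, c)) for c in centroids]
            clusters[d.index(min(d))].append(s)
        # Recompute each centroid as the pointwise mean of its cluster.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = [sum(v) / len(members) for v in zip(*members)]
    return centroids, clusters

# Toy 30-minute speed profiles (km/h): free-flow vs. congested pre-crash trends.
profiles = [[100, 98, 99, 101], [102, 100, 98, 100],
            [40, 35, 30, 28], [42, 38, 31, 30]]
centroids, clusters = kmeans(profiles, k=2)
```

With clearly separated free-flow and congested profiles, the two groups fall into distinct clusters; the study applied the same idea to real pre-crash speed series to expose the dominant traffic patterns.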
Abstract:
ROBERT EVAPORATORS in Australian sugar factories are traditionally constructed with 44.45 mm outside diameter stainless steel tubes of ~2 m length for all stages of evaporation. There are a few vessels with longer tubes (up to 2.8 m) and with smaller and larger diameters (38.1 and 50.8 mm). Queensland University of Technology is undertaking a study to investigate the heat transfer performance of tubes of different lengths and diameters across the whole range of process conditions typically encountered in the evaporator set. Incorporating these results into practical evaporator designs requires an understanding of the cost implications of constructing evaporator vessels with calandrias having tubes of different dimensions. Cost savings are expected for tubes of smaller diameter and longer length in terms of material, labour and installation costs in the factory. However, these savings must be weighed against the heat transfer area required for the evaporation duty, which is likely a function of the tube dimensions. In this paper a capital cost model is described which provides the relative cost of constructing and installing Robert evaporators of the same heating surface area but with different tube dimensions. Evaporators of 2000, 3000, 4000 and 5000 m2 are investigated. This model will be used in conjunction with the heat transfer efficiency data (when available) to determine the optimum tube dimensions for a new evaporator at a specified evaporation duty. Consideration is also given to other factors such as juice residence time (with its implications for sucrose degradation and control) and droplet de-entrainment in evaporators of different tube dimensions.
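Part of the trade-off the cost model captures can be seen in a back-of-envelope calculation: for a fixed heating surface area, the number of tubes to drill, roll and seal into the tube plates falls as tubes get longer or wider. The sketch below assumes the heating surface is reckoned on the tube outside diameter; the paper's actual costing basis is not stated in the abstract.

```python
import math

def tube_count(area_m2, od_mm, length_m):
    """Number of tubes needed for a given heating surface area, taking the
    surface of one tube as pi * OD * L (a simplifying assumption)."""
    return area_m2 / (math.pi * (od_mm / 1e3) * length_m)

# A 2000 m2 calandria: conventional 44.45 mm x 2.0 m tubes versus
# hypothetical 38.1 mm x 2.8 m tubes.
n_conv = tube_count(2000, 44.45, 2.0)
n_alt = tube_count(2000, 38.1, 2.8)
```

One observation follows directly from the formula: the total installed tube length n*L = A/(pi*d) depends only on the diameter, so longer tubes reduce the number of tube-plate holes and joints without changing the total length of tubing purchased.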
Abstract:
A theoretical model to describe the plasma-assisted growth of carbon nanofibers (CNFs) is proposed. Using the model, the plasma-related effects on the nanofiber growth parameters, such as the growth rate due to surface and bulk diffusion, the effective carbon flux to the catalyst surface, the characteristic residence time and diffusion length of carbon atoms on the catalyst surface, and the surface coverages, have been studied. The dependence of these parameters on the catalyst surface temperature and ion and etching gas fluxes to the catalyst surface is quantified. The optimum conditions under which a low-temperature plasma environment can benefit the CNF growth are formulated. These results are in good agreement with the available experimental data on CNF growth and can be used for optimizing synthesis of related nanoassemblies in low-temperature plasma-assisted nanofabrication. © 2008 American Institute of Physics.
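The characteristic residence time of adsorbed carbon atoms mentioned above is commonly estimated with a Frenkel-type Arrhenius expression, tau = (1/nu0) * exp(E_des / (k_B * T)). The sketch below is a generic illustration of that dependence on catalyst surface temperature, not the paper's specific model; the attempt frequency and desorption energy are assumed placeholder values.

```python
import math

K_B = 8.617333e-5  # Boltzmann constant, eV/K

def residence_time(e_des_ev, temp_k, nu0=1e13):
    """Frenkel estimate of the mean surface residence time of an adatom.
    nu0 is an assumed attempt frequency (~1e13 Hz); e_des_ev is the
    desorption energy in eV. Higher temperature -> shorter residence."""
    return (1.0 / nu0) * math.exp(e_des_ev / (K_B * temp_k))

# Illustrative values: the same desorption barrier at two surface temperatures.
tau_cool = residence_time(1.8, 700)
tau_hot = residence_time(1.8, 900)
```

The exponential dependence is why modest changes in catalyst surface temperature, or in the ion and etching-gas fluxes that heat and modify the surface, can shift the balance between surface diffusion to the growth site and premature desorption.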
Abstract:
The growth of single-walled carbon nanotubes (SWCNTs) in plasma-enhanced chemical vapor deposition (PECVD) is studied using a surface diffusion model. It is shown that at low substrate temperatures (≤1000 K), the atomic hydrogen and ion fluxes from the plasma can strongly affect nanotube growth. The ion-induced hydrocarbon dissociation can be the main process that supplies carbon atoms for SWCNT growth and is responsible for the frequently reported higher (compared to thermal chemical vapor deposition) nanotube growth rates in plasma-based processes. On the other hand, excessive deposition of plasma ions and atomic hydrogen can reduce the diffusion length of the carbon-bearing species and their residence time on the nanotube lateral surfaces. This reduction can adversely affect the nanotube growth rates. The results here are in good agreement with the available experimental data and can be used for optimizing SWCNT growth in PECVD.
Abstract:
The reliable response to weak biological signals requires that they be amplified with fidelity. In E. coli, the flagellar motors that control swimming can switch direction in response to very small changes in the concentration of the signaling protein CheY-P, but how this works is not well understood. A recently proposed allosteric model, based on cooperative conformational spread in a ring of identical protomers, seems promising, as it qualitatively reproduces the switching, locked-state behavior and Hill coefficient values measured for the rotary motor. In this paper we undertook a comprehensive simulation study to analyze the behavior of this model in detail and made predictions on three experimentally observable quantities: the switch time distribution, the locked-state interval distribution and the Hill coefficient of the switch response. We parameterized the model using experimental measurements, finding excellent agreement with published data on motor behavior. Analysis of the simulated switching dynamics revealed a mechanism for chemotactic ultrasensitivity, in which cooperativity is indispensable for realizing both coherent switching and effective amplification. These results show how cells can combine elements of analog and digital control to produce switches that are simultaneously sensitive and reliable. © 2012 Ma et al.
Abstract:
Synthesis of high quality boron carbide (B4C) powders is achieved by carbothermal reduction of boron oxide (B2O3) from a condensed boric acid (H3BO3)/polyvinyl acetate (PVAc) product. Precursor solutions are prepared via free radical polymerisation of vinyl acetate (VA) monomer in methanol in the presence of dissolved H3BO3. A condensed product is then formed by flash evaporation under vacuum. As excess VA monomer is removed at the evaporation step, the polymerisation time is used to manage the availability of carbon for reaction. This control of carbon also facilitates dispersion of H3BO3 in solution due to the presence of residual VA monomer. B4C powders with very low residual carbon are formed at temperatures as low as 1,250 °C with a 4-hour residence time.
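The overall carbothermal reduction referred to above is conventionally written as 2 B2O3 + 7 C → B4C + 6 CO, which is what makes precise control of available carbon important: excess carbon remains as residue, while a deficit leaves unreacted oxide. A quick stoichiometric check using standard atomic masses:

```python
# Standard atomic masses (g/mol).
M_B, M_C, M_O = 10.811, 12.011, 15.999

M_B2O3 = 2 * M_B + 3 * M_O
M_B4C = 4 * M_B + M_C
M_CO = M_C + M_O

def carbon_per_kg_b4c():
    """Stoichiometric carbon demand of 2 B2O3 + 7 C -> B4C + 6 CO,
    expressed as kg of carbon per kg of B4C produced."""
    return (7 * M_C) / M_B4C

# Mass balance of the overall reaction (should close to zero).
imbalance = (2 * M_B2O3 + 7 * M_C) - (M_B4C + 6 * M_CO)
```

The ratio works out to roughly 1.52 kg of carbon per kg of B4C, so even a few percent error in the carbon made available by the polymerisation step translates directly into residual carbon or unconverted B2O3 in the product.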
Abstract:
The 12.7-10.5 Ma Cougar Point Tuff in southern Idaho, USA, consists of 10 large-volume (>10²-10³ km³ each), high-temperature (800-1000 °C), rhyolitic ash-flow tuffs erupted from the Bruneau-Jarbidge volcanic center of the Yellowstone hotspot. These tuffs provide evidence for compositional and thermal zonation in pre-eruptive rhyolite magma, and suggest the presence of a long-lived reservoir that was tapped by numerous large explosive eruptions. Pyroxene compositions exhibit discrete compositional modes with respect to Fe and Mg that define a linear spectrum punctuated by conspicuous gaps. Airfall glass compositions also cluster into modes, and the presence of multiple modes indicates tapping of different magma volumes during early phases of eruption. Equilibrium assemblages of pigeonite and augite are used to reconstruct compositional and thermal gradients in the pre-eruptive reservoir. The recurrence of identical compositional modes and of mineral pairs equilibrated at high temperatures in successive eruptive units is consistent with the persistence of their respective liquids in the magma reservoir. Recurrence intervals of identical modes range from 0.3 to 0.9 Myr and suggest possible magma residence times of similar duration. Eruption ages, magma temperatures, Nd isotopes, and pyroxene and glass compositions are consistent with a long-lived, dynamically evolving magma reservoir that was chemically and thermally zoned and composed of multiple discrete magma volumes.
Abstract:
Purpose: Knowledge management (KM) is important to the knowledge-intensive construction industry. The diversified and changing nature of work in this field warrants a stocktake of past research, identification of changes, and the mapping of a KM research framework for future exploration. Design/methodology/approach: The study involves three aspects. First, three stages of KM research in construction were distinguished in terms of the time distribution of 217 target publications. Major topics in each stage were extracted to understand the changes in research emphasis from an evolutionary perspective. Second, the past works were summed up in a three-dimensional research framework in terms of management organization, managerial methodology and approach, and managerial objective. Finally, potential research orientations were predicted to expand the existing research framework. Findings: It was found that (1) KM research has blossomed significantly in the last two decades and retains great potential; (2) major topics of KM have changed in terms of technology, technique, organization, attributes of knowledge and research objectives; (3) past KM studies centred on management organization, managerial methodology and approach, and managerial objective, and thus a three-dimensional research framework was proposed; (4) within the research framework, team-level, project-level and firm-level KM were studied to achieve project, organizational and competitive objectives through integrated methodologies of information technology, social techniques and KM process tools; and (5) nine potential research orientations were predicted corresponding to the three dimensions. Finally, an expanded research framework was proposed to encourage and guide future research in this field. Research limitations/implications: The paper focused only on the construction industry.
The findings require further exploration to capture important research works that were not published in English or fell outside the time period studied. Originality/value: The paper forms a systematic framework of KM research in construction and predicts potential research orientations. It provides much value for researchers who want to understand the past and the future of global KM research in the construction industry.
The Optimal Smoothing of the Wigner-Ville Distribution for Real-Life Signals Time-Frequency Analysis
Abstract:
Isolating a faulted segment, from either side of a fault, in a radial feeder that has several converter-interfaced DGs is a challenging task when current-sensing protective devices are employed. The protective device, even if it senses a downstream fault, may not operate if the fault current level is low due to the current-limiting operation of the converters. In this paper, a new inverse-type relay based on line admittance measurement is introduced to protect a distribution network containing several converter-interfaced DGs. The basic operation of this relay and its grading and reach settings are explained. Moreover, a method is proposed to compensate for the fault resistance so that relay operation under this condition remains reliable. The performance of the designed relay is then evaluated in a radial distribution network. The results are validated through PSCAD/EMTDC simulation and MATLAB calculations.
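The abstract does not give the relay's characteristic equation, but the intent of an inverse-type response driven by admittance can be sketched by analogy with the familiar IEC inverse overcurrent curve, substituting the measured-to-setting admittance ratio for the current ratio. All constants and names below are illustrative assumptions, not the paper's settings.

```python
def relay_trip_time(y_measured, y_setting, tms=0.1, k=0.14, alpha=0.02):
    """Illustrative inverse-time characteristic driven by admittance magnitude:
    t = TMS * k / ((Ym/Ys)^alpha - 1). The constants k and alpha mirror the
    IEC standard-inverse overcurrent curve and are assumptions only.
    Returns None when the measured admittance is below pickup (restrain)."""
    ratio = y_measured / y_setting
    if ratio <= 1.0:
        return None  # admittance below the reach setting: relay restrains
    return tms * k / (ratio ** alpha - 1.0)
```

A close-in fault presents a large line admittance and trips quickly; a remote fault near the reach limit trips slowly, which is what allows relays along the feeder to be time-graded without relying on fault current magnitude.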
Abstract:
The concept of radar was developed for estimating the distance (range) and velocity of a target from a receiver. The distance measurement is obtained by measuring the time taken for the transmitted signal to propagate to the target and return to the receiver. The target's velocity is determined by measuring the Doppler-induced frequency shift of the returned signal caused by the rate of change of the time-delay from the target. As researchers further developed conventional radar systems it became apparent that additional information was contained in the backscattered signal, and that this information could in fact be used to describe the shape of the target itself. This is because a target can be considered a collection of individual point scatterers, each of which has its own velocity and time-delay. Delay-Doppler parameter estimation of each of these point scatterers thus corresponds to a mapping of the target's range and cross-range, producing an image of the target. Much research has been done in this area since the early radar imaging work of the 1960s. At present, radar imaging falls into two main categories. The first relates to the case where the backscattered signal is considered deterministic; the second to the case where the backscattered signal is of a stochastic nature. In both cases, the information describing the target's scattering function is extracted using the ambiguity function, a function which correlates the backscattered signal in time and frequency with the transmitted signal. In practical situations it is often necessary to site the transmitter and the receiver of the radar system at different locations. The problem in these situations is that a reference signal must then be present in order to calculate the ambiguity function.
This causes an additional problem in that detailed phase information about the transmitted signal is then required at the receiver. It is this latter problem which has led to the investigation of radar imaging using time-frequency distributions. As will be shown in this thesis, the phase information about the transmitted signal can be extracted from the backscattered signal using time-frequency distributions. The principal aim of this thesis was the development, and subsequent discussion, of the theory of radar imaging using time-frequency distributions. Consideration is first given to the case where the target is diffuse, i.e. where the backscattered signal has temporal stationarity and a spatially white power spectral density. The complementary situation is also investigated, i.e. where the target is no longer diffuse, but some degree of correlation exists between the time-frequency points. Computer simulations are presented to demonstrate the concepts and theories developed in the thesis. For the proposed radar system to be practically realisable, both the time-frequency distributions and the associated algorithms developed must be able to be implemented in a timely manner. For this reason an optical architecture is proposed. This architecture is specifically designed to obtain the required time and frequency resolution when using laser radar imaging. The complex light amplitude distributions produced by this architecture have been computer simulated using an optical compiler.
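The ambiguity function referred to throughout, the correlation of the backscattered signal with the transmitted signal in both time and frequency, can be computed discretely as follows. This is a generic narrowband sketch for illustration, not the thesis's optical implementation; the chirp test signal and sample spacing are assumptions.

```python
import cmath

def ambiguity(rx, tx, delay, doppler, dt=1.0):
    """Discrete narrowband cross-ambiguity function:
    A(tau, nu) = sum_n rx[n] * conj(tx[n - tau]) * exp(-2j*pi*nu*n*dt).
    rx, tx are lists of complex samples; delay is in samples, doppler in
    cycles per sample interval dt. Terms outside tx's support are skipped."""
    total = 0j
    for i in range(len(rx)):
        j = i - delay
        if 0 <= j < len(tx):
            total += (rx[i] * tx[j].conjugate()
                      * cmath.exp(-2j * cmath.pi * doppler * i * dt))
    return total

# Auto-ambiguity of a short linear-FM (chirp) pulse: the surface peaks at
# zero delay and zero Doppler, where it equals the signal energy.
tx = [cmath.exp(0.1j * k * k) for k in range(16)]
peak = ambiguity(tx, tx, 0, 0.0)
```

For a monostatic radar, tx is the known transmitted waveform; in the bistatic situation described above, a reference copy of tx (including its phase) must be available at the receiver, which is precisely the difficulty that motivates the time-frequency-distribution approach.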