943 results for Aqueous two-phase polymer systems
Abstract:
With the rapid growth of the Internet, computer attacks are increasing at a fast pace and can easily cause millions of dollars in damage to an organization. Detecting these attacks is an important issue in computer security. There are many types of attacks, and they fall into four main categories: Denial of Service (DoS), Probe, User to Root (U2R), and Remote to Local (R2L) attacks. Among these, DoS and Probe attacks appear with high frequency over short periods of time when they target systems. They differ from normal traffic data and can easily be separated from normal activities. In contrast, U2R and R2L attacks are embedded in the data portions of packets and normally involve only a single connection, which makes it difficult to achieve satisfactory detection accuracy for these two attack types. We therefore focus on the ambiguity problem between normal activities and U2R/R2L attacks. The goal is to build a detection system that can accurately and quickly detect these two attacks. In this dissertation, we design a two-phase intrusion detection approach. In the first phase, a correlation-based feature selection algorithm is proposed to improve detection speed. Features with poor ability to predict attack signatures, and features inter-correlated with one or more other features, are considered redundant. Such features are removed so that only the indispensable information of the original feature space remains. In the second phase, we develop an ensemble intrusion detection system to achieve accurate detection performance. The proposed method combines multiple feature-selecting intrusion detectors with a data-mining intrusion detector. The former consist of a set of detectors, each of which uses a fuzzy clustering technique and belief theory to resolve the ambiguity problem. The latter applies data mining techniques to automatically extract computer users' normal behavior from training network traffic data. The final decision is a combination of the outputs of the feature-selecting and data-mining detectors. The experimental results indicate that our ensemble approach not only significantly reduces detection time but also effectively detects U2R and R2L attacks that contain degrees of ambiguous information.
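The correlation-based filtering described in the first phase can be pictured with a minimal sketch (not the dissertation's exact algorithm): features that predict the attack label poorly, or that are strongly inter-correlated with an already retained feature, are treated as redundant and dropped. The thresholds and toy data below are illustrative assumptions.

```python
import numpy as np

def correlation_feature_selection(X, y, label_thresh=0.1, redundancy_thresh=0.9):
    """Keep features that correlate with the label but not with each other.

    X: (n_samples, n_features) array of traffic features
    y: (n_samples,) binary labels (1 = attack, 0 = normal)
    Thresholds are illustrative, not taken from the dissertation.
    """
    n_features = X.shape[1]
    # Pearson correlation of each feature with the class label
    label_corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)])
    # Consider features in order of decreasing predictive ability
    order = np.argsort(-label_corr)
    selected = []
    for j in order:
        if label_corr[j] < label_thresh:
            continue  # poor predictor of attack signatures -> redundant
        # Drop features highly inter-correlated with an already kept feature
        if any(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > redundancy_thresh for k in selected):
            continue
        selected.append(j)
    return selected

# Toy example: feature 1 duplicates feature 0, feature 2 is pure noise
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
f0 = y + 0.1 * rng.standard_normal(200)
X = np.column_stack([f0, f0 * 2.0, rng.standard_normal(200)])
print(correlation_feature_selection(X, y))  # the duplicated feature is dropped as redundant
```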
Abstract:
The introduction of phase change material fluid and nanofluid in micro-channel heat sink design can significantly increase the cooling capacity of the heat sink because of the unique features of these two kinds of fluids. To better assist the design of a high-performance micro-channel heat sink using phase change fluid and nanofluid, the heat transfer enhancement mechanism behind the flow of such fluids must be thoroughly understood. A detailed parametric study is conducted to further investigate the heat transfer enhancement of the phase change material particle suspension flow, using the two-phase non-thermal-equilibrium model developed by Hao and Tao (2004). The parametric study is conducted under normal conditions with Reynolds numbers of Re = 90–600 and phase change material particle concentrations of ϵp ≤ 0.25, as well as the extreme conditions of very low Reynolds number (Re < 50), high particle concentration (ϵp = 50%–70%) slurry flow. Using two newly defined parameters, the effectiveness factor ϵeff and the performance index PI, it is found that there exists an optimal relation between the channel design parameters L and D, the particle volume fraction ϵp, the Reynolds number Re, and the wall heat flux qw. The influence of the particle volume fraction ϵp, the particle size dp, and the particle viscosity μp on the phase change material suspension flow is investigated and discussed. The model was validated against available experimental data. The conclusions will assist designers in making decisions related to the design or selection of a micro-pump suitable for micro- or mini-scale heat transfer devices. To understand the heat transfer enhancement mechanism of nanofluid flow at the particle level, the lattice Boltzmann method is used because of its mesoscopic nature and its many numerical advantages. Using a two-component lattice Boltzmann model, the heat transfer enhancement of the nanofluid is analyzed by incorporating the different forces acting on the nanoparticles into the model. It is found that the nanofluid provides better heat transfer enhancement at low Reynolds numbers, and that the Brownian motion effect of the nanoparticles is weakened as the flow speed increases.
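As a rough illustration of "incorporating the different forces acting on the nanoparticles", the sketch below advances a single nanoparticle under Stokes drag toward the local fluid velocity plus a random Brownian force, using an explicit Euler step. It is a generic Langevin-type update, not the two-component lattice Boltzmann model used in the study, and all property values are assumptions.

```python
import numpy as np

# Generic Langevin-type update for one nanoparticle in a carrier fluid.
# Illustrative only; parameter values are assumptions, not from the study.
kB = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0              # fluid temperature, K
mu_f = 8.9e-4          # dynamic viscosity of water, Pa*s
d_p = 50e-9            # particle diameter, m
rho_p = 3970.0         # particle density (e.g. alumina), kg/m^3
m_p = rho_p * np.pi * d_p**3 / 6.0
gamma = 3.0 * np.pi * mu_f * d_p          # Stokes drag coefficient
dt = 1e-10                                 # time step, s

def step(v_particle, v_fluid, rng):
    """One explicit Euler step: drag toward local fluid velocity + Brownian kick."""
    drag = -gamma * (v_particle - v_fluid)
    # Brownian force with variance set by the fluctuation-dissipation relation
    sigma = np.sqrt(2.0 * kB * T * gamma / dt)
    brownian = sigma * rng.standard_normal(2)
    return v_particle + dt * (drag + brownian) / m_p

rng = np.random.default_rng(1)
v = np.zeros(2)
for _ in range(1000):
    v = step(v, v_fluid=np.array([0.01, 0.0]), rng=rng)
print(v)  # particle velocity relaxes toward the fluid velocity, with Brownian jitter
```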
Abstract:
Contamination of soil, sediment and groundwater by hydrophobic organic compounds (HOCs) is a matter of growing concern because groundwater is a valuable and limited resource, and because such contamination is difficult to address. This investigation involved an experimental evaluation of the addition of several surfactant solutions to aqueous and soil-water systems contaminated with phenanthrene, a selected HOC. The results are presented in terms of:
* phenanthrene solubilization achieved through surfactant addition
* observed effects of surfactant addition on the mineralization of phenanthrene
* estimation of relative toxicities of various surfactants using toxicity assays
* literature-reported biodegradability/persistence of selected surfactants
* surfactant sorption/precipitation onto soil and its impacts on the proposed use of surfactant-amended remediation
Surfactants were observed to facilitate the transfer of phenanthrene from the soil-sorbed phase to the aqueous pseudophase; however, surfactant solubilization did not translate into enhanced phenanthrene biodegradation.
Abstract:
Category hierarchy is an abstraction mechanism for efficiently managing large-scale resources. In an open environment, a category hierarchy will inevitably become inappropriate for managing resources that constantly change with unpredictable patterns, and an inappropriate category hierarchy will mislead the management of resources. The increasing dynamicity and scale of online resources increase the need to maintain the category hierarchy automatically. Previous studies on category hierarchies mainly focus on either the generation of a category hierarchy or the classification of resources under a pre-defined category hierarchy; the automatic maintenance of a category hierarchy has been neglected. Making abstractions among categories and measuring the similarity between categories are the two basic operations for generating a category hierarchy. Humans are good at making abstractions but limited in their ability to calculate similarities across large-scale resources; computing models are good at calculating such similarities but limited in their ability to make abstractions. To take advantage of both the human view and computing ability, this paper proposes a two-phase approach to automatically maintaining a category hierarchy at two scales by detecting internal pattern changes of categories. The global phase clusters resources to generate a reference category hierarchy and measures the similarity between categories to detect inappropriate categories in the initial hierarchy; the accuracy of the clustering approach in generating the reference hierarchy determines the rationality of the global maintenance. The local phase detects topical changes and then adjusts inappropriate categories with three local operations. The global phase can quickly target inappropriate categories top-down and carry out cross-branch adjustments, which also accelerates the local-phase adjustments, while the local phase detects and adjusts local-range inappropriate categories that are not handled in the global phase. By combining the two complementary phases, the approach can significantly improve the topical cohesion and accuracy of the category hierarchy. A new measure is proposed for evaluating a category hierarchy that considers not only the balance of the hierarchical structure but also the accuracy of classification. Experiments show that the proposed approach is feasible and effective for adjusting an inappropriate category hierarchy. The approach can be used to maintain category hierarchies for managing various resources in dynamic application environments, and it also provides a way to specialize current online category hierarchies so that resources are organized under more specific categories.
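A minimal sketch of the global-phase idea, under assumed details not given in the abstract (a vector representation of resources such as TF-IDF, a cosine-similarity measure, and an illustrative threshold): categories whose centroid no longer matches any cluster centroid of the reference hierarchy are flagged as inappropriate.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def flag_inappropriate(categories, reference_centroids, threshold=0.8):
    """Flag categories whose centroid no longer matches any reference cluster.

    categories: dict mapping category name -> (n_resources, n_dims) array of
                resource feature vectors (representation assumed, e.g. TF-IDF)
    reference_centroids: (k, n_dims) centroids from clustering all resources
    threshold: illustrative similarity cut-off, not taken from the paper
    """
    flagged = []
    for name, vectors in categories.items():
        centroid = vectors.mean(axis=0)
        best = max(cosine(centroid, c) for c in reference_centroids)
        if best < threshold:
            flagged.append((name, best))
    return flagged

# Toy usage: one coherent category, one drifted category
rng = np.random.default_rng(2)
ref = np.eye(3)                                   # three reference cluster centroids
cats = {
    "coherent": rng.normal([1, 0, 0], 0.05, size=(20, 3)),
    "drifted":  rng.normal([0.5, 0.5, 0.5], 0.05, size=(20, 3)),
}
print(flag_inappropriate(cats, ref))              # the 'drifted' category is flagged
```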
Abstract:
This paper develops an integrated optimal power flow (OPF) tool for distribution networks at two spatial scales. At the local scale, the distribution network, the natural gas network, and the heat system are coordinated as a microgrid. At the urban scale, the impact of the natural gas network is treated as a set of constraints on distribution network operation. The proposed approach incorporates unbalanced three-phase electrical systems, natural gas systems, and combined cooling, heating, and power systems. The interactions among these three energy systems are described by an energy hub model combined with component capacity constraints. To efficiently handle the resulting nonlinear constrained optimization problem, a particle swarm optimization algorithm is employed to set the control variables in the OPF problem. Numerical studies indicate that, with the proposed OPF method, the distribution network can be operated economically and the tie-line power can be managed effectively.
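A minimal sketch of how a particle swarm optimizer can set box-bounded control variables against a penalized cost, in the spirit of the OPF formulation described above; the hyperparameters and the placeholder objective are assumptions, not the paper's model.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic particle swarm optimizer for box-bounded control variables.

    objective: callable mapping a control vector to a scalar cost
               (e.g. operating cost plus penalty terms for violated constraints)
    bounds: (dim, 2) array of lower/upper limits on each control variable
    All hyperparameters are typical textbook values, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                 # respect control-variable limits
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Placeholder objective: quadratic cost with a penalty when the sum exceeds a cap
cost = lambda u: np.sum(u**2) + 1e3 * max(0.0, np.sum(u) - 1.5)**2
best, val = pso(cost, bounds=np.array([[0.0, 1.0]] * 3))
print(best, val)
```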
Abstract:
Food production and consumption for cities has become a global concern because the increasing number of people living in urban areas threatens food security. It is argued that people living in cities have become disconnected from food production, leading to reduced nutrition in diets and increased food waste. Integrating food production into cities (urban agriculture) can help alleviate some of these issues. The lack of space at ground level in high-density urban areas has accelerated the idea of using spare building surfaces for food production. Various growing methods are being used for food production on buildings, and they can be split into two main types: soil-less systems and soil-based systems. This paper is a holistic assessment (underpinned by the triple bottom line of sustainable development) of these two types of systems for food production on buildings, looking at the benefits and limitations of each type in this context. The results illustrate that soil-less systems are more productive per square metre, which increases the amount of locally grown, fresh produce available in urban areas. The results also show that soil-based systems for cultivation on buildings are, overall, more environmentally and socially beneficial for urban areas than soil-less systems.
Abstract:
The FIREDASS (FIRE Detection And Suppression Simulation) project is concerned with the development of fine water mist systems as a possible replacement for the halon fire suppression system currently used in aircraft cargo holds. The project is funded by the European Commission under the BRITE EURAM programme, and the FIREDASS consortium is made up of a combination of industrial, academic, research and regulatory partners. As part of this programme of work, a computational model has been developed to help engineers optimise the design of the water mist suppression system. This computational model is based on Computational Fluid Dynamics (CFD) and is composed of the following components: fire model, mist model, two-phase radiation model, suppression model, and detector/activation model. The fire model, developed by the University of Greenwich, uses prescribed release rates for heat and gaseous combustion products to represent the fire load; typical release rates have been determined through experiments conducted by SINTEF. The mist model, also developed by the University of Greenwich, is a Lagrangian particle tracking procedure that is fully coupled to both the gas phase and the radiation field. The radiation model, developed by the National Technical University of Athens, uses a six-flux formulation. The suppression model, developed by SINTEF and the University of Greenwich, is based on an extinguishment criterion that depends on oxygen concentration and temperature. The detector/activation model, developed by Cerberus, allows many different detector and misting-nozzle configurations to be tested within the computational model. These sub-models have been integrated by the University of Greenwich into the FIREDASS software package. The model has been validated against data from the SINTEF/GEC test campaigns and shows good agreement with these experimental results. The best agreement is obtained at the ceiling, which is where the detectors and misting nozzles would be located in a real system. In this paper the model is briefly described and some results from the validation of the fire and mist models are presented.
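A minimal sketch of Lagrangian particle tracking for a single water droplet, with Stokes-type drag toward the local gas velocity plus gravity. This is a generic illustration only: the FIREDASS mist model is additionally coupled to the gas phase and the radiation field, and all parameter values below are assumptions.

```python
import numpy as np

# Minimal Lagrangian droplet-tracking step: Stokes-type drag toward the local
# gas velocity plus gravity. Illustrative only; property values are assumed.
rho_w = 1000.0        # droplet density, kg/m^3
mu_g = 1.8e-5         # gas dynamic viscosity, Pa*s (assumed)
g = np.array([0.0, -9.81, 0.0])

def advance_droplet(pos, vel, gas_vel, d, dt):
    """One explicit Euler step for a droplet of diameter d (m)."""
    tau = rho_w * d**2 / (18.0 * mu_g)          # Stokes relaxation time
    accel = (gas_vel - vel) / tau + g
    vel_new = vel + dt * accel
    pos_new = pos + dt * vel_new
    return pos_new, vel_new

pos = np.zeros(3)
vel = np.array([0.0, -5.0, 0.0])                 # initial spray velocity (assumed)
for _ in range(100):
    pos, vel = advance_droplet(pos, vel, gas_vel=np.zeros(3), d=100e-6, dt=1e-4)
print(pos, vel)
```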
Abstract:
In recent years, silicon integrated photonics has advanced rapidly. Modulators built with this technology offer potentially attractive characteristics for short-reach communication systems: they are expected to operate at high transmission rates while limiting fabrication cost and power consumption. In parallel, multi-level pulse amplitude modulation (PAM) is promising for this type of system. This work therefore focuses on the development of silicon modulators for the transmission of PAM signals.
In the first chapter, the theoretical concepts needed to design silicon modulators are presented, with an emphasis on Mach-Zehnder modulators and Bragg-grating-based modulators. The electro-optic effects in silicon, PAM modulation, the different types of integrated electrodes, and distortion compensation through signal processing are also detailed.
In the second chapter, a Mach-Zehnder modulator with segmented electrodes is presented. Segmenting the electrodes allows PAM optical signals to be generated directly from binary sequences. This approach eliminates the digital-to-analog converter by integrating its function in the optical domain, with the aim of reducing the cost of the communication system. The chapter contains a detailed description of the modulator, the optical and electrical characterization results, and the system tests. The system tests include pre-compensation or post-compensation of the signal in the form of frequency-response equalization for the PAM-4 and PAM-8 modulation formats at various bit rates. A transmission rate of 30 Gb/s is demonstrated in both cases, despite a significant frequency-response limitation introduced by the radio-frequency circuit assembly (3 dB bandwidth of 8 GHz). This is the first demonstration of PAM-8 modulation using a segmented-electrode Mach-Zehnder modulator. Finally, the conclusions drawn from this work led to the design of a second segmented-electrode Mach-Zehnder modulator, currently under test, whose performance shows very high potential.
In the third chapter, a Bragg grating modulator with two phase shifts is presented. Bragg gratings remain a relatively unexplored approach to modulation, even though their spectral response can be controlled precisely, an attractive property for modulator design. In this work, we propose adding two phase shifts to a uniform Bragg grating to obtain a transmission peak within its reflection band; the amplitude of this transmission peak can then be altered with a pn junction. As in the second chapter, this chapter includes a detailed description of the modulator, the optical and electrical characterization results, and the system tests. The characterization of pn junctions using the Bragg grating modulator is also explained. PAM-4 transmission at 60 Gb/s and OOK transmission at 55 Gb/s are demonstrated after compensation of the signal distortions. To our knowledge, this is the fastest Bragg grating modulator reported to date. Moreover, for the first time, the performance of such a modulator approaches that of the fastest silicon modulators based on microring resonators or Mach-Zehnder interferometers.
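The segmented-electrode "optical DAC" concept can be illustrated with a simplified model: each bit of a PAM-4 symbol drives one electrode segment, the segment phase shifts add in the interferometer, and the output intensity follows the Mach-Zehnder cosine-squared transfer function. The 2:1 segment ratio, bias point, and drive phase below are assumptions for illustration, not the device's parameters; the transfer-function nonlinearity visibly compresses the outer levels, which is one reason segment lengths and drives are tuned in practice.

```python
import numpy as np

# Simplified model of PAM-4 generation with a two-segment Mach-Zehnder modulator:
# each binary stream drives one electrode segment, and segment phase shifts add
# in the optical domain (an "optical DAC"). The 2:1 segment ratio, bias point and
# drive phase below are assumptions for illustration, not the device's values.
def mzm_pam4(msb_bits, lsb_bits, dphi=0.4 * np.pi, bias=0.5 * np.pi):
    msb = np.asarray(msb_bits, dtype=float)
    lsb = np.asarray(lsb_bits, dtype=float)
    # MSB segment is twice as long as the LSB segment -> twice the phase shift
    phase = bias + dphi * (2.0 * msb + lsb) / 3.0
    return np.cos(phase / 2.0) ** 2              # normalized output intensity

bits = np.array([0, 0, 0, 1, 1, 0, 1, 1])        # two bits per symbol
levels = mzm_pam4(bits[0::2], bits[1::2])
print(np.round(levels, 3))                       # four distinct optical levels
```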
Abstract:
Practical application of flow boiling to ground- and space-based thermal management systems hinges on the ability to predict the system's heat removal capabilities under expected operating conditions. Research in this field has shown that the heat transfer coefficient within two-phase heat exchangers can depend strongly on the prevailing flow regime. This finding has inspired an effort to develop mechanistic heat transfer models for each flow pattern, which are likely to outperform traditional empirical correlations. As a contribution to this effort, this work aimed to identify the heat transfer mechanisms of the slug flow regime through analysis of individual Taylor bubbles. An experimental apparatus was developed to inject single vapor Taylor bubbles into co-currently flowing liquid HFE 7100. The heat transfer was measured with an infrared thermography technique as the bubble rose through a heated tube of 6 mm inner diameter. High-speed flow visualization was obtained and the bubble film thickness was measured in an adiabatic section. Experiments were conducted at various liquid mass fluxes (43–200 kg/m²s) and gravity levels (0.01g–1.8g) to characterize the effect of bubble drift velocity on the heat transfer mechanisms; variable-gravity testing was conducted during a NASA parabolic flight campaign. Results showed that the drift velocity strongly affects the hydrodynamics and heat transfer of single elongated bubbles. At low gravity levels, bubbles exhibited shapes characteristic of capillary flows, and the heat transfer enhancement due to the bubble was dominated by conduction through the thin film. At moderate to high gravity, traditional Taylor bubbles provided little enhancement within the film, but large peaks in wake heat transfer occurred due to turbulent vortices induced by the film plunging into the trailing liquid slug. Characteristics of the wake heat transfer profiles were analyzed and related to the predicted velocity field, and the results were shown to agree with numerical simulations by colleagues at EPFL, Switzerland. In addition, a preliminary study examined the effect of a Taylor bubble passing through nucleate flow boiling, showing that the thinning thermal boundary layer within the film suppressed nucleation and thereby decreased the heat transfer coefficient.
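Where the enhancement is dominated by conduction through the thin liquid film, a common first-order estimate (not a value quoted in the abstract) is the conduction limit of the film heat transfer coefficient; the HFE 7100 liquid conductivity and the film thickness below are assumed for illustration.

```latex
% Conduction-limited film heat transfer coefficient (first-order estimate):
% h_film = k_l / delta, with an assumed HFE 7100 liquid conductivity
% k_l ~ 0.069 W/(m K) and an illustrative film thickness delta ~ 50 um.
\[
  h_{\mathrm{film}} \approx \frac{k_l}{\delta}
  = \frac{0.069\ \mathrm{W\,m^{-1}K^{-1}}}{50\times10^{-6}\ \mathrm{m}}
  \approx 1.4\times10^{3}\ \mathrm{W\,m^{-2}K^{-1}}
\]
```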
Abstract:
The study of green chemistry is dedicated to eliminating or reducing toxic waste. One route to this goal is to explore alternative reaction conditions and parameters, leading to the development of more benign synthetic routes and reagents. The primary focus of this research is to find optimal reaction conditions for the oxidation of a primary alcohol to an aldehyde. As a case study, the oxidation of benzyl alcohol to benzaldehyde, a common industrial process, was examined. Traditionally this oxidation is carried out using the Jones reagent, chromium(VI) oxide (chromium trioxide, CrO3) in sulphuric acid, and a great deal of research has gone into using less toxic reagents such as MnO2 or KMnO4 supported on a clay base. This research has led to an improvement on these alternatives: a lithium chloride (LiCl) catalyst in a montmorillonite K10 clay solid phase, together with hydrogen peroxide as the oxidizing agent, as an even greener alternative to these traditional oxidizing agents. Experiments were carried out to determine the lifetime of this LiCl/clay system compared with MnO2 and KMnO4, to investigate its ability to catalyze the oxidation of other aromatic alcohols (such as 4-methoxybenzyl alcohol and diphenylmethanol), and to further improve the system's adherence to green chemistry principles. Greener solvent alternatives were examined by replacing the toluene solvent with dimethyl carbonate (DMC), and reaction conditions were optimized to improve product yield. It was determined that the LiCl/H2O2 system was, in most cases, equally effective at catalyzing the oxidation of benzyl alcohol to benzaldehyde. Although this catalyst and oxidizing agent eliminate the toxic waste generated by chromium reagents, they presented significant challenges in product isolation because of the aqueous-organic phase separation.
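The target transformation is the peroxide oxidation of benzyl alcohol to benzaldehyde; a balanced overall equation, with the LiCl/clay system written over the arrow as the catalyst (stoichiometry only, no mechanism implied), is:

```latex
% Overall oxidation of benzyl alcohol to benzaldehyde by hydrogen peroxide
% (LiCl/montmorillonite K10 shown as the catalyst; stoichiometry only)
\[
  \mathrm{C_6H_5CH_2OH + H_2O_2
  \;\xrightarrow{\ LiCl/K10\ clay\ }\;
  C_6H_5CHO + 2\,H_2O}
\]
```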
Abstract:
Over the last decade, the success of social networks has significantly reshaped how people consume information, and recommendation of content based on user profiles is well received. However, as users become predominantly mobile, little has been done to consider the impact of the wireless environment, especially capacity constraints and the changing channel. In this dissertation, we investigate a centralized wireless content delivery system that aims to optimize overall user experience under the capacity constraints of the wireless network by deciding what content to deliver, when, and how. We propose a scheduling framework that incorporates content-based reward and deliverability. Our approach exploits the broadcast nature of wireless communication and the social nature of content through multicasting and precaching. Results indicate that this joint optimization approach outperforms existing layered systems that separate recommendation and delivery, especially when the wireless network is operating at maximum capacity. By utilizing a limited number of transmission modes, we significantly reduce the complexity of the optimization. We also introduce the design of a hybrid system that handles transmissions for both system-recommended content ('push') and active user requests ('pull'). Further, we extend the joint optimization framework to a wireless infrastructure with multiple base stations. The problem becomes much harder because there are many more system configurations, including power allocation and how resources are shared among the base stations ('out-of-band', in which base stations transmit on dedicated spectrum with no interference, and 'in-band', in which they share the spectrum and must mitigate interference). We propose a scalable two-phase scheduling framework: 1) each base station obtains delivery decisions and resource allocations individually; 2) the system consolidates these decisions and allocations, reducing redundant transmissions. Additionally, if the social network applications can provide predictions of how social content disseminates, the wireless network can schedule transmissions accordingly and significantly improve dissemination performance by reducing delivery delay. We propose a method that combines: 1) hybrid systems to handle active dissemination requests; and 2) predictions of dissemination dynamics from the social network applications. This method mitigates the performance degradation of content dissemination caused by wireless delivery delay. Results indicate that our proposed system design is both efficient and easy to implement.
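A toy sketch of capacity-constrained scheduling with multicast, in the spirit of the content-based reward described above: each candidate transmission serves all interested users at once, and a greedy reward-per-resource heuristic stands in for the dissertation's joint optimization. The data structure, scores, and capacity budget are assumptions for illustration.

```python
from dataclasses import dataclass

# Toy sketch of capacity-constrained content scheduling with multicast: each
# candidate transmission serves every interested user at once, so its reward
# scales with audience size. Greedy selection by reward per unit of capacity is
# a simple heuristic stand-in for the dissertation's joint optimization.
@dataclass
class Candidate:
    content_id: str
    interested_users: int      # users whose profiles match this content
    relevance: float           # content-based reward per user (assumed score)
    resource_cost: float       # capacity consumed (e.g. resource blocks)

def schedule(candidates, capacity):
    """Pick multicast transmissions maximizing reward within the capacity budget."""
    ranked = sorted(
        candidates,
        key=lambda c: c.interested_users * c.relevance / c.resource_cost,
        reverse=True,
    )
    chosen, used = [], 0.0
    for c in ranked:
        if used + c.resource_cost <= capacity:
            chosen.append(c.content_id)
            used += c.resource_cost
    return chosen

cands = [
    Candidate("news_A", interested_users=40, relevance=0.9, resource_cost=3.0),
    Candidate("clip_B", interested_users=5, relevance=0.8, resource_cost=1.0),
    Candidate("video_C", interested_users=25, relevance=0.6, resource_cost=4.0),
]
print(schedule(cands, capacity=5.0))   # expected: ['news_A', 'clip_B']
```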