872 results for Large-scale experiments
Abstract:
Strong convective events can produce extreme precipitation, hail, lightning or gusts, potentially inducing severe socio-economic impacts. These events have a relatively small spatial extension and, in most cases, a short lifetime. In this study, a model is developed for estimating convective extreme events based on large-scale conditions. It is shown that strong convective events can be characterized by a Weibull distribution of radar-based rainfall with a low shape and a high scale parameter value. A radius of 90 km around a station reporting a convective situation turned out to be suitable. A methodology is developed to estimate the Weibull parameters, and thus the occurrence probability of convective events, from large-scale atmospheric instability and enhanced near-surface humidity, which are usually found on a larger scale than the convective event itself. Here, the probability of the occurrence of extreme convective events is estimated from the KO index, which indicates stability, and the relative humidity at 1000 hPa. Both variables are computed from the ERA-Interim reanalysis. In a first version of the methodology, these two variables are applied to estimate the spatial rainfall distribution and thus the occurrence of a convective event. The developed method shows significant skill in estimating the occurrence of convective events as observed at synoptic stations, by lightning measurements, and in severe weather reports. In order to take frontal influences into account, a scheme for the detection of atmospheric fronts is implemented. While generally higher instability is found in the vicinity of fronts, the skill of this approach is largely unchanged. Additional improvements were achieved by a bias correction and the use of ERA-Interim precipitation. The resulting estimation method is applied to the ERA-Interim period (1979-2014) to establish a ranking of estimated convective extreme events. Two strong estimated events that reveal a frontal influence are analysed in detail. As a second application, the method is applied to GCM-based decadal predictions in the period 1979-2014, which were initialized every year. It is shown that decadal predictive skill for convective event frequencies over Germany is found for the first 3-4 years after initialization.
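A minimal sketch of the kind of calculation this abstract describes, assuming radar-derived rain rates within the 90 km radius are available as an array; the scipy-based Weibull fit, the function name and the shape/scale thresholds are illustrative assumptions, not values from the study.

```python
# Minimal sketch (not the study's code): characterize radar rainfall within
# 90 km of a station by a Weibull fit and flag a potentially convective
# situation when the shape parameter is low and the scale parameter is high.
# The thresholds below are illustrative placeholders.
import numpy as np
from scipy.stats import weibull_min

def weibull_convective_indicator(rain_rates_mm_h, shape_thresh=0.7, scale_thresh=5.0):
    """Fit a Weibull distribution to non-zero radar rain rates (mm/h)."""
    wet = np.asarray(rain_rates_mm_h)
    wet = wet[wet > 0.0]
    if wet.size < 30:                                  # too few wet pixels to fit reliably
        return None
    shape, _, scale = weibull_min.fit(wet, floc=0.0)   # fix the location parameter at zero
    is_convective = (shape < shape_thresh) and (scale > scale_thresh)
    return {"shape": shape, "scale": scale, "convective": is_convective}

# Example with synthetic rain rates
rng = np.random.default_rng(0)
sample = weibull_min.rvs(0.6, scale=8.0, size=500, random_state=rng)
print(weibull_convective_indicator(sample))
```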
Abstract:
Aim: Positive regional correlations between biodiversity and human population have been detected for several taxonomic groups and geographical regions. Such correlations could have important conservation implications and have been mainly attributed to ecological factors, with little testing for an artefactual explanation: more populated regions may show higher biodiversity because they are more thoroughly surveyed. We tested the hypothesis that the correlation between people and herptile diversity in Europe is influenced by survey effort.
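The abstract describes testing whether a people-diversity correlation is an artefact of survey effort; a rough sketch of one way to examine that, a residual-based partial correlation on synthetic data, is shown below. The variable names and the use of record counts as an effort proxy are assumptions, not the paper's actual analysis.

```python
# Illustrative sketch (not the paper's analysis): compare the raw correlation
# between population and recorded richness with the partial correlation after
# the linear effect of survey effort has been removed.
import numpy as np

def partial_corr(x, y, covar):
    """Pearson correlation of x and y after removing the linear effect of covar."""
    design = np.column_stack([np.ones_like(covar), covar])
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
effort = rng.gamma(2.0, 50.0, size=200)                  # e.g. number of records per grid cell
population = effort * 10 + rng.normal(0, 100, 200)       # population density proxy
richness = 5 + 0.02 * effort + rng.normal(0, 1, 200)     # recorded herptile richness

print("raw r:    ", np.corrcoef(population, richness)[0, 1])
print("partial r:", partial_corr(population, richness, effort))
```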
Abstract:
The increasing integration of renewable energies into the electricity grid contributes considerably to achieving the European Union goals on energy and greenhouse gas (GHG) emissions reduction. However, it also brings problems to grid management. Large-scale energy storage can provide the means for a better integration of the renewable energy sources, for balancing supply and demand, for increasing energy security, for better management of the grid, and for converging towards a low-carbon economy. Geological formations have the potential to store large volumes of fluids with minimal impact on the environment and society. One of the ways to ensure large-scale energy storage is to use the storage capacity of geological reservoirs. In fact, there are several viable technologies for underground energy storage, as well as several types of underground reservoirs that can be considered. The geological energy storage technologies considered in this research were: Underground Gas Storage (UGS), Hydrogen Storage (HS), Compressed Air Energy Storage (CAES), Underground Pumped Hydro Storage (UPHS) and Thermal Energy Storage (TES). For these different types of underground energy storage technologies there are several types of geological reservoirs that can be suitable, namely: depleted hydrocarbon reservoirs, aquifers, salt formations and caverns, engineered rock caverns and abandoned mines. Specific site-screening criteria are applicable to each of these reservoir types and technologies, which determine the viability of the reservoir itself, and of the technology, for any particular site. This paper presents a review of the criteria applied within the scope of the Portuguese contribution to the EU-funded project ESTMAP – Energy Storage Mapping and Planning.
Abstract:
The correlations between the evolution of supermassive black holes (SMBHs) and their host galaxies suggest that SMBH accretion on sub-pc scales (active galactic nuclei, AGN) is linked to the building of the galaxy over kpc scales, through the so-called AGN feedback. Most of the galaxy assembly occurs in overdense large-scale structures (LSSs). AGN residing in powerful sources in LSSs, such as proto-brightest cluster galaxies (BCGs), can affect the evolution of the surrounding intra-cluster medium (ICM) and nearby galaxies. Among distant AGN, high-redshift radio galaxies (HzRGs) are found to be excellent BCG progenitor candidates. In this thesis we analyze novel interferometric observations of the so-called "J1030" field, centered around the z = 6.3 SDSS quasar J1030+0524, carried out with the Atacama Large Millimeter/submillimeter Array (ALMA) and the Karl G. Jansky Very Large Array (JVLA). This field hosts an LSS assembling around a powerful HzRG at z = 1.7 that shows evidence of positive AGN feedback in heating the surrounding ICM and promoting star formation in multiple galaxies at distances of hundreds of kpc. We report the detection of gas-rich members of the LSS, including the HzRG. We show that the LSS is going to evolve into a massive local cluster and that the HzRG is the proto-BCG. We unveil signatures of the proto-BCG's interaction with the surrounding ICM, strengthening the positive AGN feedback scenario. From the JVLA observations of the "J1030" field we extracted one of the deepest extragalactic radio surveys to date (~12.5 uJy at 5 sigma). Exploiting the synergy with the deep X-ray survey (~500 ks), we investigated the relation between the X-ray and radio emission of an X-ray-selected sample, unveiling that the radio emission is powered by different processes (star formation and AGN), and that the AGN-driven subsample is mostly composed of radio-quiet objects that display a significant X-ray/radio correlation.
Abstract:
In this thesis we present the development and the current status of the IFrameNet project, aimed at the construction of a large-scale lexical semantic resource for the Italian language based on Frame Semantics theories. We begin by situating our work within the wider context of Frame Semantics and of the FrameNet project, which, since 1997, has attempted to apply these theories to lexicography. We then analyse and discuss the applicability of the structure of the American resource to Italian; more specifically, we focus on the domain of fear, worry and anxiety. Finally, we propose some modifications aimed at improving this domain of the resource with respect to its coherence, its ability to accurately represent the linguistic reality, and, in particular, its applicability to Italian.
Abstract:
The main object of the present paper is to give formulas and methods which enable us to determine the minimum number of repetitions, or of individuals, necessary to guarantee to some extent the success of an experiment. The theoretical basis of all the processes consists essentially in the following. Knowing the frequency of the desired events p and of the undesired events q, we may calculate the frequency of all possible combinations to be expected in n repetitions by expanding the binomial (p + q)^n. Determining which of these combinations we want to avoid, we calculate their total frequency, selecting the value of the exponent n of the binomial in such a way that this total frequency is equal to or smaller than the accepted limit of precision:

n! p^n [ 1/n! + (q/p)/(1!(n−1)!) + (q/p)^2/(2!(n−2)!) + (q/p)^3/(3!(n−3)!) + … ] ≤ P_lim   (1b)

There does not exist an absolute limit of precision, since its value depends not only upon psychological factors in our judgement but is at the same time a function of the number of repetitions. For this reason I have proposed (1, 56) two relative values, one equal to 1/5n as the lowest value of probability and the other equal to 1/10n as the highest value of improbability, leaving between them what may be called the "region of doubt". However, these formulas cannot be applied in our case, since this number n is just the unknown quantity. Thus we have to use, instead of the more exact values of these two formulas, the conventional limits P_lim equal to 0.05 (precision 5%), 0.01 (precision 1%) and 0.001 (precision 0.1%). The binomial formula as explained above (cf. formula 1b), however, is of rather limited applicability owing to the excessive calculation necessary, and we thus have to procure approximations as substitutes. We may use, without loss of precision, the following approximations: a) the normal or Gaussian distribution when the expected frequency p has any value between 0.1 and 0.9 and when n is at least superior to ten; b) the Poisson distribution when the expected frequency p is smaller than 0.1. Tables V to VII show for some special cases that these approximations are very satisfactory. The practical solution of the following problems, stated in the introduction, can now be given.

A) What is the minimum number of repetitions necessary in order to avoid that any one of a treatments, varieties, etc. may be accidentally always the best, or the best and second best, or the first, second and third best, or finally one of the m best treatments, varieties, etc.? Using the first term of the binomial, we have the following equation for n:

n = log P_lim / log(m/a) = log P_lim / (log m − log a)   (5)

B) What is the minimum number of individuals necessary in order that a certain type, expected with the frequency p, may appear in at least one, two, three or a = m + 1 individuals?
1) For p between 0.1 and 0.9, using the Gaussian approximation and starting from np − δ√(p(1 − p)n) = a − 1 = m, we have:

b = δ √((1 − p)/p), c = m/p   (7)
n = [(b + √(b^2 + 4c))/2]^2, n' = 1/p, n(cor) = n + n'   (8)

We have to use the correction n' when p has a value between 0.25 and 0.75. The Greek letter delta represents in the present case the unilateral limits of the Gaussian distribution for the three conventional limits of precision: 1.64, 2.33 and 3.09 respectively.
When we are only interested in having at least one individual, m becomes equal to zero, c vanishes, and for a = 1 the formula reduces to:

n = b^2 = δ^2 (1 − p)/p, n' = 1/p, n(cor) = n + n'   (9)

2) If p is smaller than 0.1 we may use Table 1 in order to find the mean m of a Poisson distribution and determine n = m/p.

C) What is the minimum number of individuals necessary for distinguishing two frequencies p1 and p2?
1) When p1 and p2 are values between 0.1 and 0.9 we have:

n = [ δ (√(p1(1 − p1)) + √(p2(1 − p2))) / (p1 − p2) ]^2, n' = 1/(p1 − p2), n(cor) = n + n'   (13)

We have again to use the unilateral limits of the Gaussian distribution. The correction n' should be used if at least one of the values p1 or p2 lies between 0.25 and 0.75. A more complicated formula may be used in cases where we want to increase the precision:

b = δ (√(p1(1 − p1)) + √(p2(1 − p2))) / (p1 − p2), c = m/(p1 − p2), n = [(b + √(b^2 + 4c))/2]^2, n' = 1/(p1 − p2)   (14)

2) When both p1 and p2 are smaller than 0.1 we determine the quotient (p1 : p2) and procure the corresponding number m2 of a Poisson distribution in Table 2. The value of n is found by the equation:

n = m2/p2   (15)

D) What is the minimum number necessary for distinguishing three or more frequencies p1, p2, p3?
1) If the frequencies p1, p2, p3 are values between 0.1 and 0.9, we have to solve the individual equations for each pair and use the highest value of n thus determined:

n(1.2) = [ δ (√(p1(1 − p1)) + √(p2(1 − p2))) / (p1 − p2) ]^2   (16)

Delta now represents the bilateral limits of the Gaussian distribution: 1.96, 2.58, 3.29.
2) No table was prepared for the relatively rare cases of a comparison of three or more frequencies below 0.1; in such cases extremely high numbers would be required.

E) A process is given which serves to solve two problems of an informative nature: a) if a special type appears among n individuals with a frequency p(obs), what may be the corresponding ideal value of p(esp); or b) if we study samples of n individuals and expect a certain type with a frequency p(esp), what may be the extreme limits of p(obs) in individual families?
1) If we are dealing with values between 0.1 and 0.9 we may use Table 3. To solve the first question we select the respective horizontal line for p(obs), determine which column corresponds to our value of n, and find the respective value of p(esp) by interpolating between columns. In order to solve the second problem we start with the respective column for p(esp) and find the horizontal line for the given value of n either directly or by approximation and interpolation.
2) For frequencies smaller than 0.1 we have to use Table 4 and transform the fractions p(esp) and p(obs) into numbers of the Poisson series by multiplication with n. In order to solve the first problem, we verify in which line the lower Poisson limit is equal to m(obs) and transform the corresponding value of m into the frequency p(esp) by dividing by n. The observed frequency may thus be a chance deviate of any value between 0.0... and the value given by dividing the value of m in the table by n. In the second case we first transform the expectation p(esp) into a value of m and procure, in the horizontal line corresponding to m(esp), the extreme values of m, which must then be transformed, by dividing by n, into values of p(obs).

F) Partial and progressive tests may be recommended in all cases where there is a lack of material or where the loss of time is less important than the cost of large-scale experiments, since in many cases the minimum number necessary to guarantee the results within the limits of precision is rather large. One should not forget that the minimum number really represents at the same time a maximum number, necessary only if one takes into consideration essentially the unfavourable variations; smaller numbers may frequently give satisfactory results already. For instance, by definition we know that a frequency of p means that we expect one individual in every total of 1/p. If there were no chance variations, this number 1/p would be sufficient, and if there were favourable variations a still smaller number might yield one individual of the desired type. Thus, trusting to luck, one may start the experiment with numbers smaller than the minimum calculated according to the formulas given above and increase the total until the desired result is obtained, which may well be before the "minimum number" is reached. Some concrete examples of this partial or progressive procedure are given from our genetic experiments with maize.
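A small worked sketch of formulas (7)-(9) as reconstructed above, assuming the one-sided Gaussian limits quoted in the abstract; the function name and the example values are illustrative.

```python
# Minimal sketch of formulas (7)-(9) as reconstructed above: minimum number of
# individuals n so that a type expected with frequency p appears in at least
# a = m + 1 individuals, using the one-sided Gaussian limits of precision.
import math

DELTA = {0.05: 1.64, 0.01: 2.33, 0.001: 3.09}   # unilateral limits of precision

def minimum_n(p, m=0, p_lim=0.05):
    """Gaussian approximation, valid roughly for 0.1 < p < 0.9."""
    delta = DELTA[p_lim]
    b = delta * math.sqrt((1.0 - p) / p)
    c = m / p
    n = ((b + math.sqrt(b * b + 4.0 * c)) / 2.0) ** 2
    if 0.25 <= p <= 0.75:          # correction n' = 1/p in the central range
        n += 1.0 / p
    return math.ceil(n)

# Example: at least one individual of a type expected in 1/4 of the offspring,
# at the 5% limit of precision.
print(minimum_n(p=0.25, m=0, p_lim=0.05))
```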
Abstract:
High-throughput screening of the cellular effects of RNA interference (RNAi) libraries is now being increasingly applied to explore the role of genes in specific cell biological processes and disease states. However, the technology is still limited to specialty laboratories, due to the requirements for robotic infrastructure, access to expensive reagent libraries, and expertise in high-throughput screening assay development, standardization, data analysis and applications. In the future, alternative screening platforms will be required to expand functional large-scale experiments to include more RNAi constructs, allow combinatorial loss-of-function analyses (e.g. gene-gene or gene-drug interaction), gain-of-function screens, multi-parametric phenotypic readouts or comparative analysis of many different cell types. Such comprehensive perturbation of gene networks in cells will require a major increase in the flexibility and throughput of the screening platforms and a reduction of costs. As an alternative to the conventional multi-well-based high-throughput screening platforms, the development of a novel cell spot microarray method for the production of high-density siRNA reverse transfection arrays is described here. The cell spot microarray platform is distinguished from the majority of other transfection cell microarray techniques by its spatially confined array layout, which allows highly parallel screening of large-scale RNAi reagent libraries with assays otherwise difficult or not applicable to high-throughput screening. This study describes the development of the cell spot microarray method along with biological application examples of high-content immunofluorescence and phenotype-based cancer cell biological analyses focusing on the regulation of prostate cancer cell growth, the maintenance of genomic integrity in breast cancer cells, and the functional analysis of integrin protein-protein interactions in situ.
Abstract:
The n-tuple recognition method is briefly reviewed, summarizing the main theoretical results. Large-scale experiments carried out on Stat-Log project datasets confirm this method as a viable competitor to more popular methods due to its speed, simplicity, and accuracy on the majority of a wide variety of classification problems. A further investigation into the failure of the method on certain datasets finds the problem to be largely due to a mismatch between the scales which describe generalization and data sparseness.
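A compact sketch of an n-tuple (RAM-based) recognizer of the general kind reviewed above, assuming binary feature vectors; the class structure, tuple counts and toy data are illustrative choices, not the experiments from the paper.

```python
# Minimal n-tuple classifier sketch: random tuples of n bit positions address
# one boolean lookup table per class; training sets the addressed cells, and
# classification sums the lookups and takes the class with the largest score.
import numpy as np

class NTupleClassifier:
    def __init__(self, n_bits, n=8, n_tuples=100, n_classes=2, seed=0):
        rng = np.random.default_rng(seed)
        self.tuples = [rng.choice(n_bits, size=n, replace=False) for _ in range(n_tuples)]
        self.powers = 2 ** np.arange(n)
        self.memory = np.zeros((n_tuples, 2 ** n, n_classes), dtype=bool)

    def _addresses(self, x):
        # Each tuple of bits forms an integer address into its lookup table.
        return np.array([int(x[idx] @ self.powers) for idx in self.tuples])

    def fit(self, X, y):
        for x, label in zip(X, y):
            addr = self._addresses(x)
            self.memory[np.arange(len(self.tuples)), addr, label] = True
        return self

    def predict(self, X):
        preds = []
        for x in X:
            addr = self._addresses(x)
            scores = self.memory[np.arange(len(self.tuples)), addr, :].sum(axis=0)
            preds.append(int(np.argmax(scores)))
        return np.array(preds)

# Toy usage: class 1 has the first 16 bits mostly set, class 0 mostly clear.
rng = np.random.default_rng(1)
X = (rng.random((200, 64)) < 0.2).astype(np.uint8)
y = rng.integers(0, 2, 200)
X[y == 1, :16] = 1
clf = NTupleClassifier(n_bits=64).fit(X, y)
print("training accuracy:", (clf.predict(X) == y).mean())
```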
Abstract:
The performance of building envelopes and roofing systems significantly depends on accurate knowledge of wind loads and the response of envelope components under realistic wind conditions. Wind tunnel testing is a well-established practice for determining wind loads on structures. For small structures much larger model scales are needed than for large structures, to maintain modeling accuracy and minimize Reynolds number effects. In these circumstances the ability to obtain a large enough turbulence integral scale is usually compromised by the limited dimensions of the wind tunnel, meaning that it is not possible to simulate the low-frequency end of the turbulence spectrum. Such flows are called flows with Partial Turbulence Simulation. In this dissertation, the test procedure and scaling requirements for tests in partial turbulence simulation are discussed. A theoretical method is proposed for including the effects of low-frequency turbulence in the post-test analysis. In this theory the turbulence spectrum is divided into two distinct statistical processes, one at high frequencies, which can be simulated in the wind tunnel, and one at low frequencies, which can be treated in a quasi-steady manner. The joint probability of load resulting from the two processes is derived, from which full-scale equivalent peak pressure coefficients can be obtained. The efficacy of the method is proved by comparing predicted data derived from tests on large-scale models of the Silsoe Cube and Texas Tech University buildings in the Wall of Wind facility at Florida International University with the available full-scale data. For multi-layer building envelopes such as rain-screen walls, roof pavers, and vented energy-efficient walls, not only peak wind loads but also their spatial gradients are important. Wind-permeable roof claddings like roof pavers are not well dealt with in many existing building codes and standards. Large-scale experiments were carried out to investigate the wind loading on concrete pavers, including wind blow-off tests and pressure measurements. Simplified guidelines were developed for the design of loose-laid roof pavers against wind uplift. The guidelines are formatted so that use can be made of the existing information in codes and standards such as ASCE 7-10 on pressure coefficients for components and cladding.
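A rough Monte Carlo illustration of the quasi-steady idea described above: high-frequency pressure coefficients measured in the tunnel are rescaled by a low-frequency velocity fluctuation treated quasi-steadily, and a full-scale equivalent peak is read from the tail. This is only a sketch of the concept, not the dissertation's analytical joint-probability derivation; all numerical values are placeholders.

```python
# Rough Monte Carlo illustration (not the dissertation's analytical method):
# combine high-frequency pressure coefficients with a quasi-steady
# low-frequency gust factor to estimate a full-scale equivalent peak
# pressure coefficient.
import numpy as np

rng = np.random.default_rng(2)

# High-frequency process: stand-in for pressure coefficients measured on the model.
cp_hf = rng.normal(loc=-0.8, scale=0.3, size=100_000)

# Low-frequency process treated quasi-steadily: velocity fluctuation u_LF about
# the mean wind speed U rescales the reference dynamic pressure.
U, sigma_lf = 20.0, 2.5                      # m/s, illustrative values
u_lf = rng.normal(0.0, sigma_lf, size=cp_hf.size)
cp_full = cp_hf * ((U + u_lf) / U) ** 2      # quasi-steady rescaling

# Full-scale equivalent peak (e.g. 0.1 % exceedance of the negative tail).
print("HF-only peak:   ", np.quantile(cp_hf, 0.001))
print("Full-scale peak:", np.quantile(cp_full, 0.001))
```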
Abstract:
Under the influence of ongoing climate change and the growth of the world population, the efficient use of the available water resources is a central issue in irrigated agriculture. Subsurface irrigation is a technique that has been known for millennia, in which the irrigation water is applied below the soil surface and is available to the plant directly in the root zone. Pitcher irrigation is a self-regulating technique in which water is released from an unglazed clay vessel in response to the suction present in the surrounding soil. This water-saving mode of irrigation, adapted to the water demand of the plant, is the basis of the considerations and investigations of the present work. Starting from the idea of combining the advantages of pitcher irrigation with those of modern materials that permit mechanized installation and have a long service life, the development of a self-regulating irrigation system was pursued. A survey of materials and a selection of those identified as suitable led to laboratory tests in which the Darcy properties (linear relationship between flow rate and applied pressure) were verified. Membranes proved suitable, so further investigations into their use for irrigation were carried out on a selected tubular membrane. These comprised long-term laboratory tests in soil to study the development of the permeability of the tubular membrane over time and to demonstrate the self-regulation of the system. With the results obtained it was possible to carry out large-scale greenhouse experiments in which the tubular membrane was tested against drip irrigation with respect to water consumption and crop yield at sites in Germany and Algeria. These greenhouse trials showed promising results for membrane-tube irrigation, with higher yields and at the same time lower water consumption compared with drip irrigation. This was particularly evident when irrigation water with elevated salinity was used. Water quality requires careful attention, since membrane-tube irrigation reacts to water impurities with a reduction in flow, which adversely affects plant development. A task for further research must therefore be to develop technical procedures that maintain the long-term performance of the membrane in irrigation operation.
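A trivial sketch of the Darcy check mentioned above (linear relationship between flow rate and applied pressure), using a least-squares fit with an R² measure; the measurement values are invented for illustration.

```python
# Simple sketch (not from the thesis): check the Darcy property of a membrane
# sample, i.e. a linear relation between flow rate and applied pressure,
# by a least-squares fit. Values below are made up for illustration.
import numpy as np

pressure_kpa = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 80.0])
flow_l_per_h = np.array([0.9, 2.1, 4.0, 8.3, 12.1, 16.2])

slope, intercept = np.polyfit(pressure_kpa, flow_l_per_h, 1)
predicted = slope * pressure_kpa + intercept
ss_res = np.sum((flow_l_per_h - predicted) ** 2)
ss_tot = np.sum((flow_l_per_h - flow_l_per_h.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"conductance ~ {slope:.3f} l/(h*kPa), R^2 = {r_squared:.4f}")
```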
Abstract:
In this paper we present a set of field tests for the detection of humans in the water with an unmanned surface vehicle using infrared and color cameras. These experiments aimed to contribute to the development of victim target tracking and obstacle avoidance for unmanned surface vehicles operating in marine search and rescue missions. This research is integrated in the work conducted in the European FP7 research project ICARUS, which aims to develop robotic tools for large-scale rescue operations. The tests consisted of the use of the ROAZ unmanned surface vehicle, equipped with a precision GPS system for localization and both visible-spectrum and IR cameras to detect the target. In the experimental setup, the test human target was deployed in the water wearing a life vest and a diver suit (thus having a lower temperature signature over the body, except for the hands and head) and was equipped with a GPS logger. Multiple target approaches were performed in order to test the system with different relative sun-incidence angles. The experimental setup, detection method and preliminary results from the field trials performed in the summer of 2013 in Sesimbra, Portugal, and in La Spezia, Italy, are also presented in this work.
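A hedged sketch of one possible IR detection step consistent with the description (warm hands and head against a cooler body and sea): threshold a thermal frame and keep blobs of plausible size. The temperature and size limits are assumptions, not the project's actual detector.

```python
# Illustrative thermal detection step (not the project's pipeline): threshold
# an IR frame for warm regions and keep connected blobs within a size range.
import numpy as np
from scipy import ndimage

def detect_warm_blobs(ir_frame_celsius, min_temp=28.0, min_pixels=20, max_pixels=2000):
    mask = ir_frame_celsius > min_temp
    labels, n = ndimage.label(mask)
    detections = []
    for blob_id in range(1, n + 1):
        ys, xs = np.nonzero(labels == blob_id)
        if min_pixels <= ys.size <= max_pixels:
            detections.append((xs.mean(), ys.mean(), ys.size))  # centroid + area
    return detections

# Synthetic frame: cool sea with one warm patch.
frame = np.full((240, 320), 18.0) + np.random.default_rng(3).normal(0, 0.5, (240, 320))
frame[100:110, 150:158] = 32.0
print(detect_warm_blobs(frame))
```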
Abstract:
The results shown in this thesis are based on selected publications of the 2000s decade. The work was carried out in several national and EC-funded public research projects and in close cooperation with industrial partners. The main objective of the thesis was to study and quantify the most important phenomena of circulating fluidized bed (CFB) combustors by developing and applying proper experimental and modelling methods using laboratory-scale equipment. An understanding of the phenomena plays an essential role in the development of combustion and emission performance, and of the availability and controls of CFB boilers. Experimental procedures to study fuel combustion behaviour under CFB conditions are presented in the thesis. Steady-state and dynamic measurements under well-controlled conditions were carried out to produce the data needed for the development of high-efficiency, utility-scale CFB technology. The importance of combustion control and furnace dynamics is emphasized when CFB boilers are scaled up with a once-through steam cycle. Qualitative information on fuel combustion characteristics was obtained directly by comparing flue gas oxygen responses during impulse-change experiments with the fuel feed. A one-dimensional, time-dependent model was developed to analyse the measurement data. Emission formation was studied in combination with fuel combustion behaviour. Correlations were developed for NO, N2O, CO and char loading as a function of temperature and oxygen concentration in the bed area. An online method to characterize char loading under CFB conditions was developed and validated with the pilot-scale CFB tests. Finally, a new method to control the air and fuel feeds in CFB combustion was introduced. The method is based on models and an analysis of the fluctuation of the flue gas oxygen concentration. The effect of high oxygen concentrations on fuel combustion behaviour was also studied to evaluate the potential of CFB boilers for applying oxygen-firing technology to CCS. In future studies, it will be necessary to go through the whole scale-up chain, from laboratory phenomena devices through pilot-scale test rigs to large-scale commercial boilers, in order to validate the applicability and scalability of the results. This thesis shows the chain between the laboratory-scale phenomena test rig (bench scale) and the CFB process test rig (pilot scale). CFB technology has been scaled up successfully from an industrial scale to a utility scale during the last decade. The work shown in this thesis, for its part, has supported the development by producing new detailed information on combustion under CFB conditions.
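As a minimal illustration of characterizing combustion dynamics from impulse-change experiments, the sketch below fits a first-order lag to a synthetic flue-gas oxygen response; it is not the thesis's one-dimensional model, and all numbers are invented.

```python
# Minimal illustration (not the thesis's 1-D model): fit a first-order lag to
# the flue-gas oxygen response after an impulse change in fuel feed, giving an
# apparent time constant for the combustion dynamics. Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, o2_inf, delta, tau):
    return o2_inf + delta * np.exp(-t / tau)

t = np.linspace(0, 120, 61)                        # time, s
true_response = first_order(t, 3.5, -1.2, 25.0)    # O2 dips after extra fuel
measured = true_response + np.random.default_rng(4).normal(0, 0.05, t.size)

params, _ = curve_fit(first_order, t, measured, p0=(3.0, -1.0, 10.0))
print("O2_inf = %.2f %%, step = %.2f %%, tau = %.1f s" % tuple(params))
```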
Abstract:
Knowledge of slug flow characteristics is very important when designing pipelines and process equipment. When the intermittency typical of slug flow occurs, the fluctuations of the flow variables bring additional concern to the designer. Focusing on this subject, the present work presents experimental data on slug flow characteristics obtained in a large-size, large-scale facility. The results were compared with data provided by mechanistic slug flow models in order to verify their reliability in modelling actual flow conditions. Experiments were done with natural gas and with oil or water as the liquid phase. To compute the frequency and velocity of the slug cell and to calculate the lengths of the elongated bubble and liquid slug, two pressure transducers were used, measuring the pressure drop across the pipe diameter at different axial locations. A third pressure transducer measured the pressure drop between two axial locations 200 m apart. The experimental data were compared with results of Camargo's algorithm (1991, 1993), which uses the basics of Dukler & Hubbard's (1975) slug flow model, and with those calculated by the transient two-phase flow simulator OLGA.
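A sketch of the two-transducer idea described above: the slug translational velocity follows from the lag that maximizes the cross-correlation of two pressure signals a known distance apart, and the slug frequency from counting pressure events per unit time. The signals, spacing and sampling rate below are synthetic assumptions, not the facility's data.

```python
# Sketch (not the authors' code): slug velocity from the cross-correlation lag
# between two pressure signals, and slug frequency from rising edges.
import numpy as np

fs = 100.0            # sampling frequency, Hz
spacing = 2.0         # distance between the two transducers, m (illustrative)
t = np.arange(0, 60, 1 / fs)

true_velocity = 1.5   # m/s
slug_period = 4.0     # s
p1 = (np.sin(2 * np.pi * t / slug_period) > 0.8).astype(float)
p2 = (np.sin(2 * np.pi * (t - spacing / true_velocity) / slug_period) > 0.8).astype(float)

# Cross-correlate the zero-mean signals and find the lag of the maximum.
a, b = p1 - p1.mean(), p2 - p2.mean()
corr = np.correlate(b, a, mode="full")
lag_samples = np.argmax(corr) - (len(a) - 1)
velocity = spacing / (lag_samples / fs)

# Slug frequency from rising edges of the first signal.
frequency = np.count_nonzero(np.diff(p1) > 0) / t[-1]

print(f"velocity ~ {velocity:.2f} m/s, frequency ~ {frequency:.2f} 1/s")
```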
Abstract:
Complex networks have recently attracted a significant amount of research attention due to their ability to model real-world phenomena. One important problem often encountered is to limit diffusive processes spreading over the network, for example mitigating pandemic disease or computer virus spread. A number of problem formulations have been proposed that aim to solve such problems based on desired network characteristics, such as maintaining the largest network component after node removal. The recently formulated critical node detection problem aims to remove a small subset of vertices from the network such that the residual network has minimum pairwise connectivity. Unfortunately, the problem is NP-hard and the number of constraints is cubic in the number of vertices, making very large-scale problems impossible to solve with traditional mathematical programming techniques. Even approximation strategies such as dynamic programming and evolutionary algorithms are unusable for networks that contain thousands to millions of vertices. A computationally efficient and simple approach is required in such circumstances, but none currently exists. In this thesis, such an algorithm is proposed. The methodology is based on a depth-first search traversal of the network and a specially designed ranking function that considers information local to each vertex. Due to the variety of network structures, a number of characteristics must be taken into consideration and combined into a single rank that measures the utility of removing each vertex. Since removing vertices in sequential fashion impacts the network structure, an efficient post-processing algorithm is also proposed to quickly re-rank vertices. Experiments on a range of common complex network models with varying numbers of vertices are considered, in addition to real-world networks. The proposed algorithm, DFSH, is shown to be highly competitive and often outperforms existing strategies such as Google PageRank for minimizing pairwise connectivity.
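A small sketch of the critical node detection objective: the residual pairwise connectivity after removing a set of vertices, with a simple degree-based greedy removal standing in for the thesis's DFS-based ranking, which is not reproduced here.

```python
# Sketch of the objective used in critical node detection: remove k vertices
# (here by a degree-based placeholder ranking) and report the residual
# pairwise connectivity, i.e. the number of still-connected vertex pairs.
from collections import defaultdict

def pairwise_connectivity(adj, removed):
    """Sum of |C|*(|C|-1)/2 over connected components of the residual graph."""
    seen, total = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:                      # iterative DFS over one component
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        total += size * (size - 1) // 2
    return total

def greedy_critical_nodes(adj, k):
    ranked = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    return set(ranked[:k])

# Toy graph: two triangles joined through vertex 0.
edges = [(0, 1), (0, 2), (1, 2), (0, 3), (3, 4), (4, 5), (3, 5)]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

removed = greedy_critical_nodes(adj, k=1)
print(removed, pairwise_connectivity(adj, removed))
```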