970 results for large truck crash causation study


Relevance:

30.00%

Publisher:

Abstract:

Assessing how the University of the West of England (UWE) made large savings and improved the student experience through cloud email.

Relevance:

30.00%

Publisher:

Abstract:

Species composition, biomass, density, and diversity of benthic invertebrates from six hard-bottom areas were evaluated. Seasonal collections using a dredge, trawl, and suction and grab samplers yielded 432, 525, and 845 taxa, respectively. Based on collections with the different gear types, species composition of invertebrates was found to change bathymetrically. Inner- and middle-shelf sites were more similar to each other in terms of invertebrate species composition than they were to outer-shelf sites, regardless of season. Sites on the inner and outer shelf were grouped according to latitude; however, the results suggest that depth is a more important determinant of invertebrate species composition than either season or latitude. Sponges generally dominated dredge and trawl collections in terms of biomass. Generally, cnidarians, bryozoans, and sponges dominated at sites in terms of number of taxa collected. The most abundant smaller macrofauna collected in suction and grab samples were polychaetes, amphipods, and mollusks. Densities of the numerically dominant species changed both seasonally and bathymetrically, with very few of these species restricted to a specific bathymetric zone. The high diversity of invertebrates from hard-bottom sites is attributed to the large number of rare species. No consistent seasonal changes in diversity or number of species were noted for individual stations or depth zones. In addition, H and its components showed no definite patterns related to depth or latitude. However, more species were collected at middle-shelf sites than at inner- or outer-shelf sites, which may be related to more stable bottom temperatures or greater habitat complexity in that area. (PDF file contains 110 pages.)
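The index H referred to above is presumably the Shannon-Wiener diversity index; its standard form and evenness component (our gloss, not stated in the abstract) are

$$H' = -\sum_{i=1}^{S} p_i \ln p_i, \qquad J' = \frac{H'}{\ln S},$$

where $p_i$ is the proportional abundance of species $i$ and $S$ is the number of species; richness ($S$) and evenness ($J'$) are the components through which any depth or latitude effect would appear.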

Relevance:

30.00%

Publisher:

Abstract:

After reviewing the rather thin literature on the subject, we investigate the relationship between aquaculture and poverty based on a case study of five coastal communities in the Philippines. The analysis relies on a data set collated through a questionnaire survey of 148 households randomly selected in these five communities. The methodological approach combines the qualitative analysis of how this relationship is perceived by the surveyed households and a quantitative analysis of the levels and determinants of poverty and inequality in these communities. There is overwhelming evidence that aquaculture benefits the poor in important ways and that it is perceived very positively by the poor and non-poor alike. In particular, the poor derive a relatively larger share of their income from aquaculture than the rich, and a lowering of the poverty line only reinforces this result. Further, a Gini decomposition exercise shows unambiguously that aquaculture represents an inequality-reducing source of income. We believe that the pro-poor character of brackish water aquaculture in the study areas is explained by the fact that the sector provides employment to a large number of unskilled workers in communities characterized by large surpluses of labour. Our results also suggest that the analysis of the relationship between aquaculture and poverty should not focus exclusively on the socio-economic status of the farm operator/owner, as has often been the case in the past. [PDF contains 51 pages]
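A minimal sketch of the Gini decomposition by income source mentioned above (the Lerman-Yitzhaki decomposition, in which the total Gini satisfies G = sum over sources of S_k G_k R_k), run on synthetic data; the household incomes below are illustrative assumptions, not the study's survey data.

import numpy as np

def gini(y):
    # Covariance form of the Gini coefficient: G = 2*cov(y, F(y)) / mean(y),
    # where F(y) is the fractional rank (empirical CDF) of each observation.
    y = np.asarray(y, float)
    F = (np.argsort(np.argsort(y)) + 1) / len(y)
    return 2.0 * np.cov(y, F, bias=True)[0, 1] / y.mean()

def decompose(sources):
    # sources: dict mapping source name -> per-household income array.
    total = np.sum(list(sources.values()), axis=0)
    F_tot = (np.argsort(np.argsort(total)) + 1) / len(total)
    G = gini(total)
    for name, y in sources.items():
        S = y.mean() / total.mean()            # share of total income
        Gk = gini(y)                           # within-source Gini
        Fk = (np.argsort(np.argsort(y)) + 1) / len(y)
        # Gini correlation of the source with total-income ranks.
        R = np.cov(y, F_tot, bias=True)[0, 1] / np.cov(y, Fk, bias=True)[0, 1]
        # A source is inequality-reducing when its share of the Gini,
        # S*Gk*R/G, falls below its share of income, S.
        print(name, "income share:", round(S, 3),
              "Gini contribution:", round(S * Gk * R / G, 3))

rng = np.random.default_rng(1)
decompose({"aquaculture": rng.gamma(2.0, 50.0, 148),
           "other": rng.lognormal(4.0, 1.0, 148)})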

Relevance:

30.00%

Publisher:

Abstract:

The common -652 6N del variant in the CASP8 promoter (rs3834129) has been described as a putative low-penetrance risk factor for different cancer types. In particular, some studies suggested that the deleted allele (del) was inversely associated with CRC risk, while other analyses failed to confirm this. Hence, to better understand the role of this variant in the risk of developing CRC, we performed a multi-centric case-control study. In the study, the -652 6N del variant was genotyped in a total of 6,733 CRC cases and 7,576 controls recruited by six different centers located in Spain, Italy, the USA, England, the Czech Republic and the Netherlands, collaborating in the international consortium COGENT (COlorectal cancer GENeTics). Our analysis indicated that rs3834129 was not associated with CRC risk in the full data set. However, the del allele was under-represented in the subset of cases with a family history of CRC (per-allele model OR = 0.79, 95% CI = 0.69-0.90), suggesting this allele might be a protective factor against familial CRC. Since this multi-centric case-control study was performed on a very large sample size, it provides a robust clarification of the effect of rs3834129 on the risk of developing CRC in Caucasians.
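As a quick illustration of how a per-allele effect size like the one reported above is computed, the sketch below derives an allelic odds ratio and Wald 95% confidence interval from a 2x2 allele-count table; the counts are hypothetical, chosen only to land near OR = 0.79, and are not the study's data.

import math

def allelic_or(case_del, case_ins, ctrl_del, ctrl_ins):
    # Odds ratio for carrying the del allele, with a Wald 95% CI.
    # Arguments are allele counts (two alleles per genotyped subject).
    or_ = (case_del * ctrl_ins) / (case_ins * ctrl_del)
    se_log_or = math.sqrt(1/case_del + 1/case_ins + 1/ctrl_del + 1/ctrl_ins)
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts, for illustration only:
print(allelic_or(case_del=900, case_ins=1500, ctrl_del=1100, ctrl_ins=1450))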

Relevance:

30.00%

Publisher:

Abstract:

In the hybrid approach of large-eddy simulation (LES) and Lighthill's acoustic analogy for turbulence-generated sound, the turbulence source fields are obtained from an LES and the far-field turbulence-generated sound is calculated from Lighthill's acoustic analogy. As only the velocity fields at resolved scales are available from the LES, the Lighthill stress tensor, serving as the source term in Lighthill's acoustic equation, has to be evaluated from the resolved velocity fields. As a result, the contribution from the unresolved velocity fields is missing in the conventional LES. The sound from these missing scales is shown to be important and hence needs to be modeled. The present study proposes a kinematic subgrid-scale (SGS) model which recasts the unresolved velocity fields into Lighthill stress tensors. A kinematic simulation is used to construct the unresolved velocity fields with imposed temporal statistics consistent with the random sweeping hypothesis. The kinematic SGS model is used to calculate sound power spectra from isotropic turbulence and yields an improved result: the missing portion of the sound power spectra is approximately recovered in the LES.
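For reference, Lighthill's acoustic analogy referred to above is the inhomogeneous wave equation (standard notation, assumed here rather than quoted from the paper)

$$\frac{\partial^2 \rho'}{\partial t^2} - c_0^2 \nabla^2 \rho' = \frac{\partial^2 T_{ij}}{\partial x_i \partial x_j}, \qquad T_{ij} = \rho u_i u_j + (p' - c_0^2 \rho')\delta_{ij} - \tau_{ij},$$

and in an LES only the resolved part $\rho \bar{u}_i \bar{u}_j$ of the dominant term is computable, so the SGS contribution $\rho(\overline{u_i u_j} - \bar{u}_i \bar{u}_j)$ is precisely the piece the kinematic model must supply.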

Relevance:

30.00%

Publisher:

Abstract:

Large-eddy simulation (LES) has emerged as a promising tool for simulating turbulent flows in general and, in recent years, has also been applied to particle-laden turbulence with some success (Kassinos et al., 2007). The motion of inertial particles is much more complicated than that of fluid elements, and therefore LES of turbulent flow laden with inertial particles encounters new challenges. In the conventional LES, only large-scale eddies are explicitly resolved and the effects of unresolved, small or subgrid-scale (SGS) eddies on the large-scale eddies are modeled. The SGS turbulent flow field is not available. The effects of the SGS turbulent velocity field on particle motion have been studied by Wang and Squires (1996), Armenio et al. (1999), Yamamoto et al. (2001), Shotorban and Mashayek (2006a,b), Fede and Simonin (2006), Berrouk et al. (2007), Bini and Jones (2008), and Pozorski and Apte (2009), amongst others. One contemporary method to include the effects of SGS eddies on inertial particle motion is to introduce a stochastic differential equation (SDE), that is, a Langevin stochastic equation to model the SGS fluid velocity seen by inertial particles (Fede et al., 2006; Shotorban and Mashayek, 2006a,b; Berrouk et al., 2007; Bini and Jones, 2008; Pozorski and Apte, 2009). However, the accuracy of such a Langevin equation model depends primarily on the prescription of the SGS fluid velocity autocorrelation time seen by an inertial particle, or the inertial particle-SGS eddy interaction timescale (denoted by $\delta T_{Lp}$), and on a second model constant in the diffusion term which controls the intensity of the random force received by an inertial particle (denoted by $C_0$; see Eq. (7)). From the theoretical point of view, $\delta T_{Lp}$ differs significantly from the Lagrangian fluid velocity correlation time (Reeks, 1977; Wang and Stock, 1993), and this difference carries the essential nonlinearity in the statistical modeling of particle motion. $\delta T_{Lp}$ and $C_0$ may depend on the filter width and the particle Stokes number even for a given turbulent flow. In previous studies, $\delta T_{Lp}$ has been modeled either by the fluid SGS Lagrangian timescale (Fede et al., 2006; Shotorban and Mashayek, 2006b; Pozorski and Apte, 2009; Bini and Jones, 2008) or by a simple extension of the timescale obtained from the full flow field (Berrouk et al., 2007). In this work, we study the subtle and non-monotonic dependence of $\delta T_{Lp}$ on the filter width and the particle Stokes number using a flow field obtained from direct numerical simulation (DNS). We then propose an empirical closure model for $\delta T_{Lp}$. Finally, the model is validated against LES of particle-laden turbulence in predicting single-particle statistics such as particle kinetic energy. As a first step, we consider the particle motion under the one-way coupling assumption in isotropic turbulent flow and neglect the gravitational settling effect. The one-way coupling assumption is only valid for low particle mass loading.
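A generic form of the Langevin model discussed above treats the SGS fluid velocity seen by a particle as an Ornstein-Uhlenbeck process (our schematic notation; the paper's Eq. (7) is not reproduced here):

$$\mathrm{d}u_i^{\mathrm{sgs}} = -\frac{u_i^{\mathrm{sgs}}}{\delta T_{Lp}}\,\mathrm{d}t + \sqrt{C_0\,\varepsilon_{\mathrm{sgs}}}\,\mathrm{d}W_i(t),$$

where $\varepsilon_{\mathrm{sgs}}$ is the SGS dissipation rate and the $W_i$ are independent Wiener processes; $\delta T_{Lp}$ sets the velocity decorrelation time seen by the particle and $C_0$ scales the random forcing, which is why the closure for $\delta T_{Lp}$ dominates the model's accuracy.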

Relevance:

30.00%

Publisher:

Abstract:

In this paper, multi-hole cooling is studied for an oxide/oxide ceramic specimen with normal injection holes and for a SiC/SiC ceramic specimen with oblique injection holes. A special-purpose heat transfer tunnel was designed and built, which can provide a wide range of Reynolds numbers ($10^5$ to $10^7$) and a large temperature ratio of the primary flow to the coolant (up to 2.5). The cooling effectiveness determined from the measured surface temperature is investigated for the two types of ceramic specimens. It is found that the multi-hole cooling system has a high cooling efficiency for both specimens, and that it is higher for the SiC/SiC specimen than for the oxide/oxide specimen. The effects of parameters including the blowing ratio, Reynolds number, and temperature ratio on the cooling effectiveness are studied. In addition, profiles of the mean velocity and temperature above the cooling surface are measured to provide further understanding of the cooling process. Duplication of the key parameters of multi-hole cooling for a representative combustor flow condition (without radiation effects) is achieved with parameter scaling, and the results show the high efficiency of multi-hole cooling for the oblique-hole SiC/SiC specimen.
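The overall cooling effectiveness inferred from a measured wall temperature is conventionally defined as (the standard definition, assumed here rather than quoted from the paper)

$$\eta = \frac{T_g - T_w}{T_g - T_c},$$

where $T_g$ is the primary (hot gas) temperature, $T_w$ the measured surface temperature, and $T_c$ the coolant supply temperature; $\eta \to 1$ as the protected wall approaches the coolant temperature.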

Relevance:

30.00%

Publisher:

Abstract:

Unlike most previous studies on the transverse vortex-induced vibration (VIV) of a cylinder, conducted mainly under the wall-free condition (Williamson & Govardhan, 2004), this paper experimentally investigates the vortex-induced vibration of a cylinder with two degrees of freedom near a rigid wall exposed to steady flow. The amplitude and frequency responses of the cylinder are discussed. The lee-wake flow patterns of the cylinder undergoing VIV were visualized with the hydrogen bubble technique. The effects of the gap-to-diameter ratio (e0/D) and the mass ratio on the vibration amplitude and frequency are analyzed. Comparisons of the VIV response of the cylinder are made between one degree (transverse only) and two degrees of freedom (streamwise and transverse), and between the present study and previous ones. The experimental observations indicate that there are two types of streamwise vibration, i.e. the first streamwise vibration (FSV), with small amplitude, and the second streamwise vibration (SSV), which coexists with transverse vibration. The vortex shedding pattern for the FSV is approximately symmetric, and that for the SSV is alternate. The first streamwise vibration tends to disappear with decreasing e0/D. For large gap-to-diameter ratios (e.g. e0/D = 0.54~1.58), the maximum amplitudes of the second streamwise vibration and the transverse one increase with increasing gap-to-diameter ratio. But for small gap-to-diameter ratios (e.g. e0/D = 0.16, 0.23), the vibration amplitude of the cylinder increases slowly at the initial stage (i.e. at small reduced velocity Vr) and, past the maximum amplitude, decreases quickly at the last stage (i.e. at large Vr). Within the range of the small mass ratios examined (m* < 4), both the streamwise and transverse vibration amplitudes of the cylinder decrease with increasing mass ratio at a fixed value of Vr. The vibration range (in terms of Vr) tends to widen with decreasing mass ratio. In the second streamwise vibration region, the vibration frequency of a cylinder with a small mass ratio (e.g. m* = 1.44) undergoes a jump at a certain Vr. The maximum amplitude of the transverse vibration for the two-degree-of-freedom case is larger than that for the one-degree-of-freedom case, but the transverse vibration frequency of the cylinder with two degrees of freedom is lower than that with one degree of freedom (transverse).
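For reference, the reduced velocity and mass ratio used above are conventionally defined as (standard VIV notation, assumed here)

$$V_r = \frac{U}{f_n D}, \qquad m^* = \frac{4m}{\pi \rho D^2 L},$$

where $U$ is the incoming flow velocity, $f_n$ the natural frequency of the elastically mounted cylinder, $D$ its diameter, $L$ its length, $m$ the oscillating mass, and $\rho$ the fluid density; $m^*$ is the oscillating mass normalized by the mass of displaced fluid.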

Relevance:

30.00%

Publisher:

Abstract:

In response to infection or tissue dysfunction, immune cells develop into highly heterogeneous repertoires with diverse functions. Capturing the full spectrum of these functions requires analysis of large numbers of effector molecules from single cells. However, currently only 3-5 functional proteins can be measured from single cells. We developed a single cell functional proteomics approach that integrates a microchip platform with multiplex cell purification. This approach can quantitate 20 proteins from >5,000 phenotypically pure single cells simultaneously. With a one-million-fold miniaturization, the system can detect down to ~100 molecules and requires only ~10^4 cells. Single cell functional proteomic analysis finds broad applications in basic, translational and clinical studies. In the three studies conducted, it yielded critical insights for understanding clinical cancer immunotherapy, the mechanism of inflammatory bowel disease (IBD), and hematopoietic stem cell (HSC) biology.

To study phenotypically defined cell populations, single cell barcode microchips were coupled with upstream multiplex cell purification based on up to 11 parameters. Statistical algorithms were developed to process and model the high-dimensional readouts. This analysis can evaluate rare cells and is versatile across cell types and proteins. (1) We conducted an immune monitoring study of a phase 2 cancer cellular immunotherapy clinical trial that used T-cell receptor (TCR) transgenic T cells as the major therapeutic to treat metastatic melanoma. We evaluated the functional proteome of 4 antigen-specific, phenotypically defined T cell populations from the peripheral blood of 3 patients across 8 time points. (2) Natural killer (NK) cells can play a protective role in chronic inflammation, and their surface receptor, the killer immunoglobulin-like receptor (KIR), has been identified as a risk factor for IBD. We compared the functional behavior of NK cells with differential KIR expression. These NK cells were retrieved from the blood of 12 patients with different genetic backgrounds. (3) HSCs are the progenitors of immune cells and were thought to have no immediate functional capacity against pathogens. However, recent studies identified expression of Toll-like receptors (TLRs) on HSCs. We studied the functional capacity of HSCs upon TLR activation. The comparison of HSCs from wild-type mice against those from genetic knockout mouse models elucidated the responsible signaling pathway.

In all three cases, we observed profound functional heterogeneity within phenotypically defined cells. Polyfunctional cells that conduct multiple functions also produce those proteins in large amounts; they dominate the immune response. In the cancer immunotherapy study, the strong cytotoxic and antitumor functions of the transgenic TCR T cells contributed to a ~30% tumor reduction immediately after the therapy. However, this infused immune response disappeared within 2-3 weeks. Later on, some patients gained a second antitumor response, consisting of the emergence of endogenous antitumor cytotoxic T cells that performed multiple antitumor functions. These patients showed more effective long-term tumor control. In the IBD mechanism study, we noticed that, compared with others, NK cells expressing the KIR2DL3 receptor secreted a large array of effector proteins, such as TNF-α, CCLs and CXCLs. The functions of these cells regulated disease-contributing cells and protected host tissues. Their existence correlated with IBD disease susceptibility. In the HSC study, the HSCs exhibited functional capacity by producing TNF-α, IL-6 and GM-CSF. TLR stimulation activated NF-κB signaling in HSCs. The single cell functional proteome contains rich information that is independent of the genome and transcriptome. In all three cases, functional proteomic evaluation uncovered critical biological insights that would not have been resolved otherwise. The integrated single cell functional proteomic analysis constructed a detailed kinetic picture of the immune response that took place during the clinical cancer immunotherapy. It revealed concrete functional evidence that connected genetics to IBD disease susceptibility. Further, it provided predictors that correlated with clinical responses and pathogenic outcomes.

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, a method to retrieve the source finiteness, depth of faulting, and the mechanisms of large earthquakes from long-period surface waves is developed and applied to several recent large events.

In Chapter 1, source finiteness parameters of eleven large earthquakes were determined from long-period Rayleigh waves recorded at IDA and GDSN stations. The basic data set is the seismic spectra of periods from 150 to 300 sec. Two simple models of source finiteness are studied. The first model is a point source with finite duration. In the determination of the duration or source-process times, we used Furumoto's phase method and a linear inversion method, in which we simultaneously inverted the spectra and determined the source-process time that minimizes the error in the inversion. These two methods yielded consistent results. The second model is the finite fault model. Source finiteness of large shallow earthquakes with rupture on a fault plane with a large aspect ratio was modeled with the source-finiteness function introduced by Ben-Menahem. The spectra were inverted to find the extent and direction of the rupture of the earthquake that minimize the error in the inversion. This method is applied to the 1977 Sumbawa, Indonesia, 1979 Colombia-Ecuador, 1983 Akita-Oki, Japan, 1985 Valparaiso, Chile, and 1985 Michoacan, Mexico earthquakes. The method yielded results consistent with the rupture extent inferred from the aftershock area of these earthquakes.
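The Ben-Menahem source-finiteness (directivity) function mentioned above has, for unilateral rupture of length $L$ at rupture velocity $v$, the familiar form (standard expression, assumed here rather than quoted from the thesis)

$$F(\omega) = \frac{\sin X}{X}\,e^{-iX}, \qquad X = \frac{\omega L}{2}\left(\frac{1}{v} - \frac{\cos\theta}{c}\right),$$

where $c$ is the phase velocity of the Rayleigh wave and $\theta$ is the station azimuth measured from the rupture direction; inverting the spectra for the $L$ and rupture direction that minimize the misfit yields the finiteness estimates.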

In Chapter 2, the depths and source mechanisms of nine large shallow earthquakes were determined. We inverted the data set of complex source spectra for a moment tensor (linear) or a double couple (nonlinear). By solving a least-squares problem, we obtained the centroid depth or the extent of the distributed source for each earthquake. The depths and source mechanisms of large shallow earthquakes determined from long-period Rayleigh waves depend on the models of source finiteness, wave propagation, and the excitation. We tested various models of the source finiteness, Q, the group velocity, and the excitation in the determination of earthquake depths.

The depth estimates obtained using the Q model of Dziewonski and Steim (1982) and the excitation functions computed for the average ocean model of Regan and Anderson (1984) are considered most reasonable. Dziewonski and Steim's Q model represents a good global average of Q determined over the period range of the Rayleigh waves used in this study. Since most of the earthquakes studied here occurred in subduction zones, Regan and Anderson's average ocean model is considered most appropriate.

Our depth estimates are in general consistent with the Harvard CMT solutions. The centroid depths and their 90% confidence intervals (numbers in parentheses) determined by Student's t-test are: Colombia-Ecuador earthquake (12 December 1979), d = 11 km, (9, 24) km; Santa Cruz Is. earthquake (17 July 1980), d = 36 km, (18, 46) km; Samoa earthquake (1 September 1981), d = 15 km, (9, 26) km; Playa Azul, Mexico earthquake (25 October 1981), d = 41 km, (28, 49) km; El Salvador earthquake (19 June 1982), d = 49 km, (41, 55) km; New Ireland earthquake (18 March 1983), d = 75 km, (72, 79) km; Chagos Bank earthquake (30 November 1983), d = 31 km, (16, 41) km; Valparaiso, Chile earthquake (3 March 1985), d = 44 km, (15, 54) km; Michoacan, Mexico earthquake (19 September 1985), d = 24 km, (12, 34) km.

In Chapter 3, the vertical extent of faulting of the 1983 Akita-Oki, Japan, and 1977 Sumbawa, Indonesia, earthquakes is determined from fundamental and overtone Rayleigh waves. Using fundamental Rayleigh waves, the depths are determined from moment tensor inversion and fault inversion. The observed overtone Rayleigh waves are compared with synthetic overtone seismograms to estimate the depth of faulting of these earthquakes. The depths obtained from overtone Rayleigh waves are consistent with those determined from fundamental Rayleigh waves for the two earthquakes. Appendix B gives the observed seismograms of fundamental and overtone Rayleigh waves for eleven large earthquakes.

Relevance:

30.00%

Publisher:

Abstract:

A modelling study is performed to investigate the characteristics of both plasma flow and heat transfer in a laminar non-transferred arc argon plasma torch operated at atmospheric and reduced pressure. It is found that the calculated flow fields and temperature distributions are quite similar for chamber pressures of 1.0 atm and 0.1 atm. A fully developed flow regime could be achieved in the arc constrictor-tube between the cathode and the anode of the plasma torch at 1.0 atm for all the flow rates covered in this study. However, the flow could not reach the fully developed regime at 0.1 atm with a higher flow rate. The arc root is always attached to the torch anode surface near the upstream end of the anode, i.e. the abruptly expanded part of the torch channel, which is consistent with experimental observations. The surrounding gas is entrained from the torch exit into the torch interior because the inner diameter of the anode channel is large compared with that of the arc constrictor-tube.

Relevance:

30.00%

Publisher:

Abstract:

Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits has led to robots on Mars, desktop computers and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast, all while remaining functional.

This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of “active self-assembly” of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology’s numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules.

One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot match some of the feats biology has achieved.

One might think that because a system is Turing-complete, capable of computing "anything," it can do any arbitrary task. But while such a system can simulate any digital computational problem, there are many behaviors that are not "computations" in a classical sense and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface.

Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors “energetically incomplete” programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time.

As we will demonstrate and prove, a sufficiently expressive implementation of an "active" molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not "energetically incomplete." But the programmable system also needs sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive.

Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly.

We show that simple primitives such as insertion and deletion are able to generate complex and interesting results, such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active self-assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a machine capable of producing languages strictly stronger than the regular languages and, at most, as strong as context-free grammars. This is a great advance in the theory of active self-assembly, as prior models were either entirely theoretical or only implementable in the context of macro-scale robotics.

We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. The building of a line in logarithmic time is a first step toward building a shape in logarithmic time, as the recurrence sketched below makes explicit. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion. We show via spectrofluorimetry and gel electrophoresis experiments that monomer molecules are converted into the polymer in logarithmic time. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism. This shows the growth of a population of polymers in logarithmic time. We characterize in Chapter 4 the DNA insertion mechanism that we utilize. We experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude by programming the sequences of DNA that initiate the reaction.
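To see why insertion yields logarithmic-time growth: each inserted monomer replaces one insertion site with two, so active sites double every reaction round. In schematic form (our notation, not the thesis's),

$$s_{t+1} = 2 s_t \;\Rightarrow\; s_t = 2^t s_0, \qquad \ell_t \approx \ell_0 + s_0\,(2^t - 1),$$

so a polymer of length $n$ is reached after only $t = O(\log n)$ rounds, whereas passive end-addition grows the length linearly in time.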

In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random-walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.

Relevance:

30.00%

Publisher:

Abstract:

Amorphous metals that form fully glassy parts over a few millimeters in thickness are still relatively new materials. Their glassy structure gives them particularly high strengths, high yield strains, high hardness values, high resilience, and low damping losses, but it can also result in an extremely low tolerance to the presence of flaws in the material. Since this glassy structure lacks an ordered crystal structure, it also lacks the crystalline defects (dislocations) that provide the micromechanism of toughening and flaw insensitivity in conventional metals. Without a sufficient and reliable toughness that results in a large tolerance of damage in the material, metallic glasses will struggle to be adopted commercially. Here, we identify the origin of toughness in metallic glass as the competition between the intrinsic toughening mechanism of shear banding ahead of a crack and crack propagation by the cavitation of the liquid inside the shear bands. The first three chapters present a detailed study focusing mainly on the process of shear banding: its crucial role in giving rise to one of the most damage-tolerant materials known, its extreme sensitivity to the configurational state of a glass with moderate toughness, and how the configurational state can be changed with the addition of minor elements. The last chapter is a novel investigation into the cavitation barrier in glass-forming liquids, the competing process to shear banding. Together, our results represent an increased understanding of the major influences on the fracture toughness of metallic glasses and thus provide a path for the improvement and development of tougher metallic glasses.

Relevance:

30.00%

Publisher:

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a "control and optimization" point of view. After studying these three real-world networks, two abstract network problems are also explored, both motivated by power systems. The first is "flow optimization over a flow network" and the second is "nonlinear optimization over a generalized weighted graph". The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix, which describe marginal and conditional dependencies between brain regions respectively, have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit from measurements alone. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when only a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix; a sketch of this step is given below. It is shown that the graphical lasso may recover most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
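A minimal sketch of the graphical-lasso step described above, using scikit-learn on simulated node-voltage samples; the 4-node circuit, sample size, regularization strength, and threshold are illustrative assumptions, not the dissertation's setup.

import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Hypothetical 4-node circuit encoded as a sparse precision matrix:
# nonzero off-diagonal entries mark electrically connected node pairs.
theta = np.array([[ 2.0, -0.8,  0.0,  0.0],
                  [-0.8,  2.0, -0.6,  0.0],
                  [ 0.0, -0.6,  2.0, -0.7],
                  [ 0.0,  0.0, -0.7,  2.0]])

# Simulated nodal voltage measurements (a limited number of samples).
X = rng.multivariate_normal(np.zeros(4), np.linalg.inv(theta), size=200)

# Graphical lasso: l1-regularized sparse inverse-covariance estimation.
model = GraphicalLasso(alpha=0.05).fit(X)

# Recovered topology: threshold the off-diagonal of the estimated precision.
adj = np.abs(model.precision_) > 0.05
np.fill_diagonal(adj, False)
print(adj.astype(int))   # adjacency matrix of the inferred circuit graph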

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
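For context, the classical fluid-model assumption that this work relaxes can be written in Kelly-style primal form (standard notation, assumed here): each link $l$ on route $r$ is taken to see the source rate $x_r$ itself, so

$$\dot{x}_r(t) = \kappa_r\Big(w_r - x_r(t)\sum_{l \in r} p_l\big(y_l(t)\big)\Big), \qquad y_l(t) = \sum_{r' \ni l} x_{r'}(t),$$

where $p_l$ is the price (congestion signal) generated by link $l$. The dissertation's refinement replaces $y_l$ with the buffered flow actually arriving at link $l$ after upstream queueing, under which these dynamics can lose stability.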

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
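Schematically, the relaxation just described replaces the nodal power-balance equalities with inequalities (a generic OPF statement in our notation, not the dissertation's exact formulation):

$$\min_{V,\,P_G}\ \sum_i f_i(P_{Gi}) \quad \text{subject to} \quad P_{Gi} - P_{Di} \ \ge\ \mathrm{Re}\Big\{V_i \sum_k \overline{Y}_{ik}\,\overline{V}_k\Big\} \ \ \text{for all buses } i,$$

together with the usual voltage-magnitude and line-flow limits. Allowing the left side to exceed the right is precisely "power over-delivery"; with this relaxation the problem becomes solvable in polynomial time for radial networks and for meshed networks with sufficiently many phase shifters.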

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.