12 results for named inventories

in CaltechTHESIS


Relevance:

10.00%

Publisher:

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, both motivated by power systems. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveal the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix (describing marginal and conditional dependencies between brain regions, respectively) have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit from a limited number of samples. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso can recover most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
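
As a rough, self-contained illustration of the baseline technique (the standard graphical lasso, not the thesis's modified algorithm), the following Python sketch recovers the edges of a small simulated resistor ring from noisy node signals; the network, sample size, and regularization weight are all hypothetical.

    # Sketch: circuit topology recovery from node signals via the graphical lasso.
    # Illustrative only: the ring network, sample count, and alpha are hypothetical,
    # and this is the standard estimator rather than the modified version above.
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)

    # Ground-truth precision matrix of a 4-node ring (conductance Laplacian + leak):
    # nonzero off-diagonal entries correspond to circuit edges.
    K = np.array([[ 2.5, -1.0,  0.0, -1.0],
                  [-1.0,  2.5, -1.0,  0.0],
                  [ 0.0, -1.0,  2.5, -1.0],
                  [-1.0,  0.0, -1.0,  2.5]])
    cov = np.linalg.inv(K)

    # Simulate noisy node-voltage samples and fit a sparse precision matrix.
    X = rng.multivariate_normal(np.zeros(4), cov, size=500)
    model = GraphicalLasso(alpha=0.05).fit(X)

    # Thresholded off-diagonal support of the estimate suggests the topology.
    print(np.abs(model.precision_) > 1e-3)

When the true covariance matrix is ill-conditioned, increasing alpha or regularizing the sample covariance before the fit is the kind of small modification the paragraph alludes to, though the thesis's exact scheme may differ.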

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users of the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, one that takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
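
To make the fluid-model viewpoint concrete, here is a minimal Python sketch of a Kelly-style primal rate controller on a single link; it uses a static penalty price and deliberately ignores the buffering effects that the refined model above accounts for, and all gains and constants are hypothetical.

    # Sketch: primal congestion control, dx/dt = kappa * (U'(x) - q(x)),
    # with utility U(x) = log(x) and a static penalty-style link price q(x).
    # Single flow, single link; all constants are hypothetical.
    kappa, capacity, dt = 0.5, 1.0, 0.01
    x = 0.2  # initial source rate
    for _ in range(20000):
        q = 10.0 * max(0.0, x - capacity)   # price grows once the link is loaded
        x = max(x + kappa * (1.0 / x - q) * dt, 1e-6)
    print(f"equilibrium rate ~ {x:.3f}")  # settles slightly above capacity

Under the thesis's more accurate model, the price seen by each link depends on queued (buffered) rates rather than the original source rate, which is precisely what can destabilize such schemes.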

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
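
Schematically, and in our own generic notation rather than necessarily the thesis's, over-delivery amounts to turning each nodal balance equation into an inequality, so that the power arriving at a bus may exceed its demand:

    \text{exact balance:}\qquad P_{G_k} - \sum_{l \in \mathcal{N}(k)} P_{kl} = P_{D_k}

    \text{relaxed (over-delivery):}\qquad P_{G_k} - \sum_{l \in \mathcal{N}(k)} P_{kl} \ge P_{D_k}

where P_{G_k} and P_{D_k} are the generation and demand at bus k, and P_{kl} is the flow leaving bus k toward neighbor l. Relaxing the equality enlarges the feasible set just enough for the convex relaxation to become exact on the network classes described above.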

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
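
In generic, purely schematic notation (ours, not necessarily the thesis's), GNF and its relaxation can be written as:

    \text{GNF:}\quad \min_{p,f}\ \sum_i c_i(p_i) \quad \text{s.t.}\quad p_i = \sum_{j \in \mathcal{N}(i)} f_{ij},\quad f_{ji} = \phi_{ij}(f_{ij}),\quad (p, f)\ \text{in boxes}

    \text{relaxation:}\quad \text{replace}\quad f_{ji} = \phi_{ij}(f_{ij}) \quad\text{with}\quad f_{ji} \ge \phi_{ij}(f_{ij})

where c_i is the (convex) nodal cost and phi_ij the (monotone, convex) function coupling the two end flows of each line. The relaxed constraint set is convex, and the result stated above is that the relaxation still recovers the optimal injections p.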

Generalized Weighted Graphs: Motivated by power optimization problems, this part aims to find a global optimization technique for nonlinear optimizations defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real- or complex-valued optimization makes the problem easy to solve, and the generalized weighted graph is introduced precisely to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph, such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks is polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance:

10.00%

Publisher:

Abstract:

The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.

First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
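
As a small illustration of the symmetric case, the following Python sketch computes the recovery probability when a budget is spread evenly over m of n nodes and a collector reads r nodes chosen uniformly at random; recovery is assumed to succeed once the collected data reaches one unit. The model and all numbers are hypothetical simplifications of the problem described above.

    # Sketch: symmetric allocation: budget B spread over m of n nodes, each storing
    # B/m; a collector reads r random nodes and succeeds iff it sees at least
    # ceil(m/B) nonempty nodes. Parameters are hypothetical.
    import math
    from scipy.stats import hypergeom

    def recovery_prob(n, r, budget, m):
        need = math.ceil(m / budget)           # nonempty nodes required
        # nonempty nodes among the r accessed ~ Hypergeometric(n, m, r)
        return hypergeom.sf(need - 1, n, m, r)

    n, r, budget = 20, 5, 3.0
    for m in (3, 6, 12, 20):
        print(f"spread over {m:2d} nodes -> P(recovery) = {recovery_prob(n, r, budget, m):.3f}")

Varying the budget reproduces the heuristic from the paragraph: with a large budget, maximal spreading wins; with a small budget (as in the numbers above), concentrating the data on a few nodes is better.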

Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.

Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models that allow a limited number of erasures per coding window or per sliding window, and for models with erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; here the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
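
The i.i.d. part of the tradeoff is easy to sketch: for an idealized intrasession code that spreads each message over a window of T packets and decodes whenever at least k of them arrive (an MDS-style assumption; all parameters hypothetical), the decoding probability is a binomial tail, computed below in Python.

    # Sketch: idealized intrasession code over an i.i.d. erasure link.
    # A message decodes iff >= k of its T packets arrive; larger k means a larger
    # message but a lower decoding probability. Numbers are hypothetical.
    from scipy.stats import binom

    def decode_prob(T, k, erasure_p):
        # arrivals ~ Binomial(T, 1 - erasure_p); decode iff arrivals >= k
        return binom.sf(k - 1, T, 1.0 - erasure_p)

    T = 10
    for k in (2, 4, 6, 8):
        print(f"message size k={k}: P(decode) = {decode_prob(T, k, erasure_p=0.1):.4f}")

The codes analyzed in the thesis are more structured than this sketch, but the same tension between message size and decoding probability drives the results.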

Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
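
For readers unfamiliar with backpressure, the core decision rule is compact enough to show in a few lines of Python; the object names and counts below are hypothetical, and the thesis's actual VIP policy additionally drives caching decisions.

    # Sketch: one backpressure forwarding decision using virtual interest packet
    # (VIP) counts at a node a and its neighbor b. Hypothetical names and numbers.
    vip_a = {"video/1": 12, "sensor/7": 3}   # VIP backlog at node a
    vip_b = {"video/1": 5,  "sensor/7": 9}   # VIP backlog at neighbor b

    # Forward interests for the object with the largest positive backlog differential.
    best = max(vip_a, key=lambda name: vip_a[name] - vip_b.get(name, 0))
    if vip_a[best] - vip_b.get(best, 0) > 0:
        print(f"forward interests for '{best}' from a to b")  # -> video/1

Routing pressure thus follows demand gradients, which is the intuition behind its advantage over fixed shortest-path forwarding under congestion.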

Relevance:

10.00%

Publisher:

Abstract:

With data centers being the supporting infrastructure for a wide range of IT services, their efficiency has become a major concern to operators, as well as to society, for both economic and environmental reasons. The goal of this thesis is to design energy-efficient algorithms that reduce energy cost while minimizing the compromise to service. We focus on the algorithmic challenges at different levels of energy optimization across the data center stack. The algorithmic challenge at the device level is to improve the energy efficiency of a single computational device via techniques such as job scheduling and speed scaling. We analyze the common speed scaling algorithms in both the worst-case model and the stochastic model to address some fundamental questions in the design of speed scaling algorithms. The algorithmic challenge at the local data center level is to dynamically allocate resources (e.g., servers) and to dispatch the workload within a data center. We develop an online algorithm that makes a data center more power-proportional by dynamically adapting the number of active servers. The algorithmic challenge at the global data center level is to dispatch the workload across multiple data centers, considering the geographical diversity of electricity prices, the availability of renewable energy, and network propagation delays. We propose algorithms to jointly optimize routing and provisioning in an online manner. Motivated by the above online decision problems, we move on to study a general class of online problems named "smoothed online convex optimization", which seeks to minimize the sum of a sequence of convex functions when "smooth" solutions are preferred. This model allows us to bridge different research communities and to gain a more fundamental understanding of general online decision problems.
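
In generic notation (the thesis's exact formulation may differ in details), smoothed online convex optimization asks an online learner to pick x_t after seeing the convex cost f_t, paying both the hitting cost and a switching penalty:

    \min_{x_1, \dots, x_T}\ \sum_{t=1}^{T} f_t(x_t) \;+\; \beta \sum_{t=1}^{T} \lVert x_t - x_{t-1} \rVert

In the data center setting, x_t is for instance the number of active servers, f_t the energy-plus-delay cost in time slot t, and the switching term captures the cost of toggling servers on and off.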

Relevance:

10.00%

Publisher:

Abstract:

The ubiquitin-dependent proteolytic pathway plays an important role in a broad array of cellular processes, including cell cycle control and transcription. Biochemical analysis of the ubiquitination of Sic1, the B-type cyclin-dependent kinase (CDK) inhibitor in budding yeast, helped to define a ubiquitin ligase complex named SCFCdc4 (for Skp1, Cdc53/cullin, F-box protein). We found that besides Sic1, the CDK inhibitor Far1 and the replication initiation protein Cdc6 are also substrates of SCFCdc4 in vitro. A common feature in the ubiquitination of the cell cycle SCFCdc4 substrates is that they must be phosphorylated by the major cell cycle CDK, Cdc28. Gcn4, a transcription activator involved in the general control of amino acid biosynthesis, is rapidly degraded in an SCFCdc4-dependent manner in vivo. We have focused on this substrate to investigate the generality of the SCFCdc4 pathway. Through biochemical fractionations, we found that the Srb10 CDK phosphorylates Gcn4 and thereby marks it for recognition by the SCFCdc4 ubiquitin ligase. Srb10 is a physiological regulator of Gcn4 stability, because both phosphorylation and turnover of Gcn4 are diminished in srb10 mutants. Furthermore, we found that at least two different CDKs, Pho85 and Srb10, conspire to promote the rapid degradation of Gcn4 in vivo. The multistress response transcriptional regulator Msn2 is also a substrate for Srb10 and is hyperphosphorylated in an Srb10-dependent manner upon heat stress-induced translocation into the nucleus. Whereas Msn2 is cytoplasmic in resting wild-type cells, its nuclear exclusion is partially compromised in srb10 mutant cells. Srb10 has been shown to repress a subset of genes in vivo and has been proposed to inhibit transcription via phosphorylation of the C-terminal domain of RNA polymerase II. Our results suggest a general theme in which Srb10 represses the transcription of specific genes by directly antagonizing their transcriptional activators.

Relevance:

10.00%

Publisher:

Abstract:

The Earth is very heterogeneous, especially in the region close to its surface and in regions close to the core-mantle boundary (CMB). The lowermost mantle (the bottom 300 km of the mantle) hosts fast anomalies (S velocity 3% faster than PREM, modeled from Scd), slow anomalies (S velocity 3% slower than PREM, modeled from S and ScS), and extremely anomalous structures (ultra-low velocity zones, 30% lower in S velocity and 10% lower in P velocity). Strong anomalies of larger dimension are also observed beneath Africa and the Pacific, originally modeled from the travel times of S, SKS, and ScS. Given the heterogeneous nature of the Earth, an approach more accurate than travel times must be applied to study the details of these anomalous structures, and matching waveforms with synthetic seismograms has proven effective in constraining velocity structures. However, it is difficult to compute synthetic seismograms in more than one dimension, where no exact analytical solution is possible, and numerical methods such as finite differences or finite elements are too time-consuming for modeling body waveforms. We developed a 2D synthetic algorithm, extended from 1D generalized ray theory (GRT), to compute synthetic seismograms efficiently (on the order of minutes per seismogram). This 2D algorithm is related to the WKB approximation but is based on different principles; it is thus named WKM, i.e., WKB modified. WKM has been applied to study the variation of the fast D" structure beneath the Caribbean Sea and the plume beneath Africa. WKM has also been applied to study PKP precursors, a very important class of seismic phases for modeling lower-mantle heterogeneity. By matching WKM synthetic seismograms with various data, we discovered and confirmed that (a) the D" beneath the Caribbean varies laterally, and the variation is best revealed with Scd+Sab beyond 88 degrees, where Scd overruns Sab; (b) the low velocity structure beneath Africa is about 1500 km in height and at least 1000 km in width, and features a 3% reduction in S velocity; it begins as a relatively thin low velocity layer (200 km thick or less) beneath the Atlantic and rises very sharply into the mid-mantle towards Africa; and (c) at the edges of this huge African low velocity structure, ULVZs are found by modeling the large separation between S and ScS beyond 100 degrees. The ULVZ at the eastern boundary was discovered with SKPdS data and later confirmed by PKP precursor data. This is the first time that a ULVZ has been verified with distinct seismic phases.

Relevance:

10.00%

Publisher:

Abstract:

Mitochondria can remodel their membranes by fusing or dividing. These processes are required for the proper development and viability of multicellular organisms. At the cellular level, fusion is important for mitochondrial Ca2+ homeostasis, mitochondrial DNA maintenance, mitochondrial membrane potential, and respiration. Mitochondrial division, which is better known as fission, is important for apoptosis, mitophagy, and for the proper allocation of mitochondria to daughter cells during cellular division.

The functions of proteins involved in fission have been best characterized in the yeast model organism Saccharomyces cerevisiae. Mitochondrial fission in mammals has some similarities. In both systems, a cytosolic dynamin-like protein, called Dnm1 in yeast and Drp1 in mammals, must be recruited to the mitochondrial surface and polymerized to promote membrane division. Recruitment of yeast Dnm1 requires only one mitochondrial outer membrane protein, named Fis1. Fis1 is conserved in mammals, but its importance for Drp1 recruitment is minor. In mammals, three other receptor proteins (Mff, MiD49, and MiD51) play a major role in recruiting Drp1 to mitochondria. Why mammals require three additional receptors, and whether they function together or separately, are fundamental questions for understanding the mechanism of mitochondrial fission in mammals.

We have determined that Mff, MiD49, or MiD51 can function independently of one another to recruit Drp1 to mitochondria. Fis1 plays a minor role in Drp1 recruitment, suggesting that the emergence of these additional receptors has replaced the system used by yeast. Additionally, we found that Fis1/Mff and the MiDs regulate Drp1 activity differentially. Fis1 and Mff promote constitutive mitochondrial fission, whereas the MiDs activate recruited Drp1 only during loss of respiration.

To better understand the function of the MiDs, we have determined the atomic structure of the cytoplasmic domain of MiD51, and performed a structure-function analysis of MiD49 based on its homology to MiD51. MiD51 adopts a nucleotidyl transferase fold, and binds ADP as a co-factor that is essential for its function. Both MiDs contain a loop segment that is not present in other nucleotidyl transferase proteins, and this loop is used to interact with Drp1 and to recruit it to mitochondria.

Relevance:

10.00%

Publisher:

Abstract:

Understanding the origin of life on Earth has long fascinated the minds of the global community, and has been a driving factor in interdisciplinary research for centuries. Beyond the pioneering work of Darwin, perhaps the most widely known study in the last century is that of Miller and Urey, who examined the possibility of the formation of prebiotic chemical precursors on the primordial Earth [1]. More recent studies have shown that amino acids, the chemical building blocks of the biopolymers that comprise life as we know it on Earth, are present in meteoritic samples, and that the molecules extracted from the meteorites display isotopic signatures indicative of an extraterrestrial origin [2]. The most recent major discovery in this area has been the detection of glycine (NH2CH2COOH), the simplest amino acid, in pristine cometary samples returned by the NASA STARDUST mission [3]. Indeed, the open questions left by these discoveries, both in the public and scientific communities, hold such fascination that NASA has designated the understanding of our "Cosmic Origins" as a key mission priority.

Despite these exciting discoveries, our understanding of the chemical and physical pathways to the formation of prebiotic molecules is woefully incomplete. This is largely because we do not yet fully understand how the interplay between grain-surface and sub-surface ice reactions and the gas-phase affects astrophysical chemical evolution, and our knowledge of chemical inventories in these regions is incomplete. The research presented here aims to directly address both these issues, so that future work to understand the formation of prebiotic molecules has a solid foundation from which to work.

From an observational standpoint, a dedicated campaign to identify hydroxylamine (NH2OH), potentially a direct precursor to glycine, in the gas phase was undertaken. No trace of NH2OH was found. These observations motivated a refinement of the chemical models of glycine formation and have largely ruled out a gas-phase route to the synthesis of the simplest amino acid in the ISM. The mystery of the molecular carrier of a series of transitions, B11244, was resolved using observational data toward a large number of sources, confirming the identity of this important carbon-chemistry intermediate as l-C3H+ and identifying it in at least two new environments. Finally, the doubly-nitrogenated molecule carbodiimide (HNCNH) was identified in the ISM for the first time through maser emission features in the centimeter-wavelength regime.

In the laboratory, a TeraHertz Time-Domain Spectrometer was constructed to obtain the experimental spectra necessary to search for solid-phase species in the ISM in the THz region of the spectrum. These investigations have shown a striking dependence of the THz spectra on the large-scale, long-range (i.e., lattice) structure of the ices. A database of molecular spectra has been started, and both the simplest and most abundant ice species, which have already been identified, as well as a number of more complex species, have been studied. The exquisite sensitivity of the THz spectra to both the structure and thermal history of these ices may lead to better probes of complex chemical and dynamical evolution in interstellar environments.

Relevance:

10.00%

Publisher:

Abstract:

The yeast Saccharomyces cerevisiae contains a family of hsp70-related genes. One member of this family, SSA1, encodes a 70 kD heat-shock protein which, in addition to its heat-inducible expression, has a significant basal level of expression. The first 500 bp upstream of the SSA1 start point of transcription was examined by DNase I protection analysis. The results reveal the presence of at least 14 factor binding sites throughout the upstream promoter region. The function of these binding sites has been examined using a series of 5' promoter deletions fused to the reporter gene lacZ in a centromere-containing yeast shuttle vector. The following sites have been identified in the promoter and their activity in yeast determined individually with a centromere-based reporter plasmid containing a truncated CYC1/lacZ fusion: a heat-shock element or HSE, which is sufficient to convey heat-shock response on the reporter plasmid; a homology to the SV40 'core' sequence, which can repress the GCN4 recognition element (GCRE) and the yAP1 recognition element (ARE), and has been designated an upstream repression element or URE; a 'G'-rich region named G-box, which can also convey heat-shock response on the reporter plasmid; and a purine-pyrimidine alternating sequence named GT-box, which is an activator of transcription. A series of fusion constructs were made to identify a putative silencer-like element upstream of SSA1. This element is position dependent and has been localized to a region containing both an ABF1 binding site and a RAP1 binding site. Five site-specific DNA-binding factors are identified and their purification is presented: the heat-shock transcription factor or HSTF, which recognizes the HSE; the G-box binding factor or GBF; the URE recognition factor or URF; the GT-box binding factor; and the GC-box binding factor or yeast Sp1.

Relevance:

10.00%

Publisher:

Abstract:

High-resolution orbital and in situ observations acquired of the Martian surface during the past two decades provide the opportunity to study the rock record of Mars at an unprecedented level of detail. This dissertation consists of four studies whose common goal is to establish new standards for the quantitative analysis of visible and near-infrared data from the surface of Mars. Through the compilation of global image inventories, application of stratigraphic and sedimentologic statistical methods, and use of laboratory analogs, this dissertation provides insight into the history of past depositional and diagenetic processes on Mars. The first study presents a global inventory of stratified deposits observed in images from the High Resolution Image Science Experiment (HiRISE) camera on-board the Mars Reconnaissance Orbiter. This work uses the widespread coverage of high-resolution orbital images to make global-scale observations about the processes controlling sediment transport and deposition on Mars. The next chapter presents a study of bed thickness distributions in Martian sedimentary deposits, showing how statistical methods can be used to establish quantitative criteria for evaluating the depositional history of stratified deposits observed in orbital images. The third study tests the ability of spectral mixing models to obtain quantitative mineral abundances from near-infrared reflectance spectra of clay and sulfate mixtures in the laboratory for application to the analysis of orbital spectra of sedimentary deposits on Mars. The final study employs a statistical analysis of the size, shape, and distribution of nodules observed by the Mars Science Laboratory Curiosity rover team in the Sheepbed mudstone at Yellowknife Bay in Gale crater. This analysis is used to evaluate hypotheses for nodule formation and to gain insight into the diagenetic history of an ancient habitable environment on Mars.
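
As an illustration of the kind of statistical comparison such a chapter involves (not the dissertation's actual data or procedure), the Python sketch below fits two candidate distributions to synthetic bed-thickness measurements and scores each with a Kolmogorov-Smirnov test.

    # Sketch: compare exponential vs. lognormal fits to bed thicknesses.
    # The data are synthetic and the candidate choice is illustrative only.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    thicknesses = rng.lognormal(mean=0.0, sigma=0.6, size=200)  # synthetic, meters

    for name, dist in (("expon", stats.expon), ("lognorm", stats.lognorm)):
        params = dist.fit(thicknesses)
        ks = stats.kstest(thicknesses, name, args=params)
        print(f"{name:8s} KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")

Which family fits best is diagnostic: exponential-like thickness distributions are often read as a signature of stochastic stacking, whereas systematic deviations can point to more ordered depositional processes.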

Relevance:

10.00%

Publisher:

Abstract:

Wide field-of-view (FOV) microscopy is of high importance to biological research and clinical diagnosis, where high-throughput screening of samples is needed. This thesis presents the development of several novel wide FOV imaging technologies and demonstrates their capabilities in longitudinal imaging of living organisms, on the scale of viral plaques to live cells and tissues.

The ePetri Dish is a wide FOV on-chip bright-field microscope. Here we applied the ePetri platform to plaque analysis of murine norovirus 1 (MNV-1). The ePetri offers the ability to dynamically track plaques at the individual cell death event level over a wide FOV of 6 mm × 4 mm at 30 min intervals. A density-based clustering algorithm is used to analyze the spatial-temporal distribution of cell death events and to identify plaques at their earliest stages. We also demonstrate the capabilities of the ePetri in viral titer counting and in dynamically monitoring plaque formation, growth, and the influence of antiviral drugs.
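
As a toy version of that analysis (with synthetic events and hypothetical parameters, not the thesis's data), the Python sketch below clusters (x, y, t) cell-death events with DBSCAN to count nascent plaques.

    # Sketch: density-based clustering of cell-death events into plaques.
    # Synthetic events; DBSCAN parameters and the time scaling are hypothetical.
    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(2)
    # Two synthetic plaques (events clustered in space [um] and time [h]) plus noise.
    plaque1 = rng.normal([500, 400, 10], [30, 30, 2], size=(40, 3))
    plaque2 = rng.normal([2500, 1800, 20], [30, 30, 2], size=(40, 3))
    noise = rng.uniform([0, 0, 0], [6000, 4000, 48], size=(20, 3))
    events = np.vstack([plaque1, plaque2, noise])

    # Scale time so that 1 h counts like 15 um, then cluster with one threshold.
    scaled = events * np.array([1.0, 1.0, 15.0])
    labels = DBSCAN(eps=80, min_samples=5).fit_predict(scaled)
    print("plaques found:", len(set(labels) - {-1}))  # noise points get label -1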

We developed another wide FOV imaging technique, the Talbot microscope, for the fluorescence imaging of live cells. The Talbot microscope takes advantage of the Talbot effect and can generate a focal spot array to scan the fluorescence samples directly on-chip. It has a resolution of 1.2 μm and a FOV of ~13 mm². We further upgraded the Talbot microscope for the long-term time-lapse fluorescence imaging of live cell cultures, and analyzed the cells’ dynamic response to an anticancer drug.

We present two wide FOV endoscopes for tissue imaging, named the AnCam and the PanCam. The AnCam is based on contact image sensor (CIS) technology and can scan the whole anal canal within 10 seconds with a resolution of 89 μm, a maximum FOV of 100 mm × 120 mm, and a depth-of-field (DOF) of 0.65 mm. We also demonstrate the performance of the AnCam in whole anal canal imaging in both animal models and real patients. In addition, the PanCam is based on a smartphone platform integrated with a panoramic annular lens (PAL), and can capture a FOV of 18 mm × 120 mm in a single shot with a resolution of 100–140 μm. In this work we demonstrate the PanCam’s performance in imaging a stained tissue sample.

Relevance:

10.00%

Publisher:

Abstract:

Politically the Colorado river is an interstate as well as an international stream. Physically the basin divides itself distinctly into three sections. The upper section, from the headwaters to the mouth of the San Juan, comprises about 40 percent of the total area of the basin and affords about 87 percent of the total runoff, or an average of about 15 000 000 acre feet per annum. High mountains and cold weather are found in this section. The middle section, from the mouth of the San Juan to the mouth of the Williams, comprises about 35 percent of the total area of the basin and supplies about 7 percent of the annual runoff. Narrow canyons and mild weather prevail in this section. The lower third of the basin is composed mainly of hot arid plains of low altitude. It comprises some 25 percent of the total area of the basin and furnishes about 6 percent of the average annual runoff.

The proposed Diamond Creek reservoir is located in the middle section and is wholly within the boundary of Arizona. The site is at the mouth of Diamond Creek and is only 16 miles from Beach Spring, a station on the Santa Fe railroad. It is solely a power project with a limited storage capacity. The dam which creates the reservoir is of the gravity type, to be constructed across the river. The walls and foundation are of granite. For a dam of 290 feet in height, the backwater will extend about 25 miles up the river.

The power house will be placed right below the dam, perpendicular to the axis of the river. It is entirely a concrete structure. The power installation would consist of eighteen 37 500 H.P. vertical, variable-head turbines, directly connected to 28 000 kVA, 110 000 V, 3-phase, 60-cycle generators with the necessary switching and auxiliary apparatus. Each unit is to be fed by a separate penstock wholly embedded in the masonry.

Concerning the power market, the main electric transmission lines would extend to Prescott, Phoenix, Mesa, Florence, etc. The mining regions of the mountains of Arizona would be the most adequate market. The demand for power in the above-named places might not be large at present. It will, from the observation of the writer, rapidly increase with the wonderful advancement of all kinds of industrial development.

All these things being comparatively feasible, there is one difficult problem: that is the silt. At the Diamond Creek dam site the average annual silt discharge is about 82 650 acre feet. The geographical conditions, however, will not permit silt deposits right in the reservoir. So this design will be made under the assumption given in Section 4.

The silt condition and the change of lower course of the Colorado are much like those of the Yellow River in China. But one thing is different. On the Colorado most of the canyon walls are of granite, while those on the Yellow are of alluvial loess: so it is very hard, if not impossible, to get a favorable dam site on the lower part. As a visitor to this country, I should like to see the full development of the Colorado: but how about THE YELLOW!

Relevance:

10.00%

Publisher:

Abstract:

This work presents the development and investigation of a new type of concrete for the attenuation of waves induced by dynamic excitation. Recent progress in the field of metamaterials science has led to a range of novel composites which display unusual properties when interacting with electromagnetic, acoustic, and elastic waves. A new structural metamaterial with enhanced properties for dynamic loading applications is presented, which is named metaconcrete. In this new composite material the standard stone and gravel aggregates of regular concrete are replaced with spherical engineered inclusions. Each metaconcrete aggregate has a layered structure, consisting of a heavy core and a thin compliant outer coating. This structure allows for resonance at or near the eigenfrequencies of the inclusions, and the aggregates can be tuned so that resonant oscillations will be activated by particular frequencies of an applied dynamic loading. The activation of resonance within the aggregates causes the overall system to exhibit negative effective mass, which leads to attenuation of the applied wave motion. To investigate the behavior of metaconcrete slabs under a variety of loading conditions, a finite element slab model containing a periodic array of aggregates is utilized. The frequency-dependent nature of metaconcrete is investigated by considering the transmission of wave energy through a slab, which indicates the presence of large attenuation bands near the resonant frequencies of the aggregates. Applying a blast wave loading to both an elastic slab and a slab model that incorporates the fracture characteristics of the mortar matrix reveals that a significant portion of the supplied energy can be absorbed by aggregates which are activated by the chosen blast wave profile. The transfer of energy from the mortar matrix to the metaconcrete aggregates leads to a significant reduction in the maximum longitudinal stress, greatly improving the ability of the material to resist damage induced by a propagating shock wave. The various analyses presented in this work provide the theoretical and numerical background necessary for the informed design and development of metaconcrete aggregates for dynamic loading applications, such as blast shielding, impact protection, and seismic mitigation.
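
The negative-effective-mass mechanism can be seen in the textbook two-mass model of a coated inclusion (a schematic, not the dissertation's finite element treatment): a core of mass m_c coupled through the compliant coating, idealized as a spring of stiffness k, to a shell of mass m_s responds to harmonic forcing at frequency ω with effective mass

    m_{\mathrm{eff}}(\omega) = m_s + \frac{m_c \, \omega_0^2}{\omega_0^2 - \omega^2}, \qquad \omega_0 = \sqrt{k / m_c}

which becomes negative in a band just above the resonance ω0; driving the slab in that band is what produces the attenuation described above.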