Abstract:
The calculation of the settling speed of coarse particles is first addressed using accelerated Stokesian dynamics without adjustable parameters, in which the far-field force acting on each particle, rather than the particle velocity, is chosen as the dependent variable to account for inter-particle hydrodynamic interactions. The sedimentation of a simple cubic array of spherical particles is simulated and compared with available results to verify and validate the numerical code and computational scheme. The improved method retains the same O(N log N) computational cost as the usual accelerated Stokesian dynamics. More realistic sedimentation of a random suspension is then investigated with the help of a Monte Carlo method, and the computational results agree well with experimental fits. Finally, the sedimentation of finer cohesive particles, often observed in estuarine environments, is presented as a further application in coastal engineering.
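As a point of reference for the settling-speed calculation described above, the minimal sketch below evaluates the classical single-particle Stokes settling speed; it is only the isolated-sphere baseline, not the many-body accelerated Stokesian dynamics of the thesis, and the numerical values are illustrative.

    # Minimal sketch: terminal settling speed of an isolated sphere from Stokes' law,
    # v_s = 2 (rho_p - rho_f) g r^2 / (9 mu). Illustrative values only; this is the
    # single-particle baseline, not the many-body accelerated Stokesian dynamics above.
    def stokes_settling_speed(radius, rho_p, rho_f=1000.0, mu=1.0e-3, g=9.81):
        return 2.0 * (rho_p - rho_f) * g * radius**2 / (9.0 * mu)

    print(stokes_settling_speed(radius=1e-4, rho_p=2650.0))  # ~0.036 m/s for 100-um quartz sand in water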
Abstract:
Part I.
We have developed a technique for measuring the depth-time history of rigid body penetration into brittle materials (hard rocks and concretes) under a deceleration of ~10^5 g. The technique includes bar-coded projectile, sabot-projectile separation, detection and recording systems. Because the technique can give very dense data on the penetration depth-time history, the penetration velocity can be deduced. Error analysis shows that the technique has a small intrinsic error of ~3-4% in time during penetration and 0.3 to 0.7 mm in penetration depth. A series of 4140 steel projectile penetration experiments into G-mixture mortar targets has been conducted using the Caltech 40 mm gas/powder gun in the velocity range of 100 to 500 m/s.
We report, for the first time, the whole depth-time history of rigid body penetration into brittle materials (the G-mixture mortar) under 10^5 g deceleration. Based on the experimental results, including the penetration depth-time history, damage of the recovered target and projectile materials, and theoretical analysis, we find:
1. Target materials are damaged via compacting in the region in front of a projectile and via brittle radial and lateral crack propagation in the region surrounding the penetration path. The results suggest that expected cracks in front of penetrators may be stopped by a comminuted region that is induced by wave propagation. Aggregate erosion on the projectile lateral surface is < 20% of the final penetration depth. This result suggests that the effect of lateral friction on the penetration process can be ignored.
2. The final penetration depth, Pmax, scales linearly with the initial projectile energy per unit cross-sectional area, es, when targets remain intact after impact. Based on the experimental data on the mortar targets, the relation is Pmax (mm) = 1.15 es (J/mm^2) + 16.39 (a worked numerical example follows this list).
3. Estimation of the energy needed to create a unit penetration volume suggests that the average pressure acting on the target material during penetration is ~10 to 20 times higher than the unconfined strength of the target material under quasi-static loading, and 3 to 4 times higher than the highest pressure attributable to friction and to material strength and its rate dependence. In addition, the experimental data show that the interaction between cracks and the target free surface significantly affects the penetration process.
4. Based on the fact that the penetration duration, tmax, increases slowly with es and is approximately independent of projectile radius, the dependence of tmax on projectile length is suggested to be described by tmax (μs) = 2.08 es (J/mm^2) + 349.0 m/(πR^2), in which m is the projectile mass in grams and R is the projectile radius in mm. The prediction from this relation is in reasonable agreement with the experimental data for different projectile lengths.
5. Deduced penetration velocity time histories suggest that whole penetration history is divided into three stages: (1) An initial stage in which the projectile velocity change is small due to very small contact area between the projectile and target materials; (2) A steady penetration stage in which projectile velocity continues to decrease smoothly; (3) A penetration stop stage in which projectile deceleration jumps up when velocities are close to a critical value of ~ 35 m/s.
6. The deduced average deceleration, a, in the steady penetration stage for projectiles with the same dimensions is found to be a (g) = 192.4 v + 1.89 x 10^4, where v is the initial projectile velocity in m/s. The average pressure acting on target materials during penetration is estimated to be comparable to the shock wave pressure.
7. A similarity of the penetration process is found, described by a relation between the normalized penetration depth, P/Pmax, and the normalized penetration time, t/tmax, of the form P/Pmax = f(t/tmax), where f is a function of t/tmax. After f(t/tmax) is determined using experimental data for projectiles with 150 mm length, the penetration depth-time history for projectiles with 100 mm length predicted by this relation is in good agreement with experimental data. This similarity also predicts that the average deceleration increases with decreasing projectile length, which is verified by the experimental data.
8. Based on the penetration process analysis and the present data, a first-principles model for rigid body penetration is suggested. The model incorporates models for the contact area between projectile and target materials, the friction coefficient, the penetration stop criterion, and the normal stress on the projectile surface. The most important assumptions used in the model are: (1) the penetration process can be treated as a series of impact events; therefore, the pressure normal to the projectile surface is estimated using the Hugoniot relation of the target material; (2) the necessary condition for penetration is that the pressure acting on the target material is not lower than the Hugoniot elastic limit; (3) the friction force on the projectile lateral surface can be ignored due to cavitation during penetration. All the parameters involved in the model are determined from independent experimental data. The penetration depth-time histories predicted by the model are in good agreement with the experimental data.
9. Based on planar impact and previous quasi-static experimental data, the strain rate dependence of the mortar compressive strength is described by σf/σ0f = exp[0.0905 (log(ε̇/ε̇0))^1.14] over the strain rate range 10^-7/s to 10^3/s (σ0f and ε̇0 are the reference compressive strength and strain rate, respectively). The non-dispersive Hugoniot elastic wave in the G-mixture has an amplitude of ~0.14 GPa and a velocity of ~4.3 km/s.
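As a quick illustration of the empirical relations quoted in items 2, 4 and 6 above, the short sketch below evaluates them for assumed, purely illustrative projectile parameters; the specific input numbers are not taken from the thesis.

    import math

    # Illustrative evaluation of the empirical penetration relations quoted above.
    # The projectile parameters below are assumed for illustration only.
    es = 50.0          # initial kinetic energy per unit cross-sectional area, J/mm^2 (assumed)
    m  = 2350.0        # projectile mass, grams (assumed)
    R  = 20.0          # projectile radius, mm (assumed)
    v  = 300.0         # initial projectile velocity, m/s (assumed)

    P_max = 1.15 * es + 16.39                         # final penetration depth, mm
    t_max = 2.08 * es + 349.0 * m / (math.pi * R**2)  # penetration duration, microseconds
    a     = 192.4 * v + 1.89e4                        # average deceleration in the steady stage, g

    print(P_max, t_max, a)   # ~74 mm, ~757 us, ~7.7e4 g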
Part II.
Stress wave profiles in vitreous GeO2 were measured using piezoresistance gauges in the pressure range of 5 to 18 GPa under planar plate and spherical projectile impact. The experimental data show that the response of vitreous GeO2 to planar shock loading can be divided into three stages: (1) a ramp elastic precursor with a peak amplitude of 4 GPa and a peak particle velocity of 333 m/s, in which the wave velocity decreases from the initial longitudinal elastic wave velocity of 3.5 km/s to 2.9 km/s at 4 GPa; (2) a ramp wave with an amplitude of 2.11 GPa that follows the precursor when the peak loading pressure is 8.4 GPa, in which the wave velocity drops below the bulk wave velocity; (3) a shock wave achieving the final shock state, which forms when the peak pressure is > 6 GPa. The Hugoniot relation is D = 0.917 + 1.711 u (km/s), based on the present data and the data of Jackson and Ahrens [1979], for shock wave pressures between 6 and 40 GPa and ρ0 = 3.655 g/cm^3. Based on the present data, the phase change from 4-fold to 6-fold coordination of Ge^4+ with O^2- in vitreous GeO2 occurs in the pressure range of 4 to 15 ± 1 GPa under planar shock loading. Comparison of the shock loading data for fused SiO2 with those for vitreous GeO2 demonstrates that the transformation to the rutile structure is similar in both media. The Hugoniots of vitreous GeO2 and fused SiO2 are found to coincide approximately if the pressure in fused SiO2 is scaled by the ratio of fused SiO2 to vitreous GeO2 density. This result, as well as the common structure, provides the basis for considering vitreous GeO2 as an analogue material for fused SiO2 under shock loading. The experimental results from spherical projectile impact demonstrate: (1) the supported elastic shock in fused SiO2 decays less rapidly than a linear elastic wave when the elastic wave stress amplitude is higher than 4 GPa, whereas the supported elastic shock in vitreous GeO2 decays faster than a linear elastic wave; (2) in vitreous GeO2, unsupported shock waves with peak pressures in the phase transition range (4-15 GPa) decay with propagation distance, x, as 1/x^3.35, close to the prediction of Chen et al. [1998]. Based on a simple analysis of spherical wave propagation, we find that the different decay rates of a spherical elastic wave in fused SiO2 and vitreous GeO2 are predictable on the basis of the variation of compressibility with stress under one-dimensional strain conditions in the two materials.
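For orientation, the quoted linear Hugoniot D = 0.917 + 1.711 u can be combined with the standard Rankine-Hugoniot momentum jump condition P = ρ0 D u; the sketch below evaluates this for an assumed particle velocity, which is an illustrative value rather than one of the measured states.

    # Sketch: shock pressure from the quoted linear Hugoniot D = 0.917 + 1.711*u (km/s)
    # combined with the standard Rankine-Hugoniot momentum jump P = rho0 * D * u.
    # The particle velocity below is an assumed example value.
    rho0 = 3.655            # initial density, g/cm^3
    u = 1.0                 # particle velocity, km/s (assumed example)
    D = 0.917 + 1.711 * u   # shock velocity, km/s
    P = rho0 * D * u        # pressure in GPa (g/cm^3 * (km/s)^2 = GPa)
    print(D, P)             # ~2.63 km/s, ~9.6 GPa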
The intergalactic and circumgalactic medium surrounding star-forming galaxies at redshifts 2 < z < 3
Abstract:
We present measurements of the spatial distribution, kinematics, and physical properties of gas in the circumgalactic medium (CGM) of 2.0<z<2.8 UV color-selected galaxies as well as within the 2<z<3 intergalactic medium (IGM). These measurements are derived from Voigt profile decomposition of the full Lyα and Lyβ forest in 15 high-resolution, high signal-to-noise ratio QSO spectra resulting in a catalog of ∼6000 HI absorbers.
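As background to the Voigt-profile decomposition mentioned above, a common way to evaluate a Voigt line profile numerically is via the Faddeeva function; the sketch below is a generic illustration of that evaluation, not the fitting code used in the thesis, and the parameter values are arbitrary.

    import numpy as np
    from scipy.special import wofz

    def voigt_profile(x, sigma, gamma):
        """Voigt profile (Gaussian std sigma, Lorentzian HWHM gamma) via the Faddeeva function."""
        z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
        return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

    # Example: profile sampled on an arbitrary offset grid with arbitrary widths
    x = np.linspace(-5.0, 5.0, 11)
    print(voigt_profile(x, sigma=1.0, gamma=0.5))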
Chapter 2 of this thesis focuses on HI surrounding high-z star-forming galaxies drawn from the Keck Baryonic Structure Survey (KBSS). The KBSS is a unique spectroscopic survey of the distant universe designed to explore the details of the connection between galaxies and intergalactic baryons within the same survey volumes. The KBSS combines high-quality background QSO spectroscopy with large densely-sampled galaxy redshift surveys to probe the CGM at scales of ∼50 kpc to a few Mpc. Based on these data, Chapter 2 presents the first quantitative measurements of the distribution, column density, kinematics, and absorber line widths of neutral hydrogen surrounding high-z star-forming galaxies.
Chapter 3 focuses on the thermal properties of the diffuse IGM. This analysis relies on measurements of the ∼6000 absorber line widths to constrain the thermal and turbulent velocities of absorbing "clouds." A positive correlation between the column density of HI and the minimum line width is recovered and implies a temperature-density relation within the low-density IGM for which higher-density regions are hotter, as is predicted by simple theoretical arguments.
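The line-width analysis described here rests on the standard decomposition of the Doppler parameter into thermal and turbulent parts, b^2 = 2kT/m + b_turb^2; the sketch below simply inverts the thermal part for temperature, using an assumed illustrative b value rather than a measurement from the catalog.

    # Sketch: temperature implied by a purely thermal HI Doppler parameter,
    # b_therm = sqrt(2 k T / m_H)  =>  T = m_H * b_therm^2 / (2 k).
    # The b value below is an assumed illustrative number, not a thesis measurement.
    k_B = 1.380649e-23      # J/K
    m_H = 1.6735575e-27     # kg

    def temperature_from_b(b_kms):
        b = b_kms * 1.0e3   # km/s -> m/s
        return m_H * b**2 / (2.0 * k_B)

    print(temperature_from_b(20.0))   # ~2.4e4 K for b = 20 km/s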
Chapter 4 presents new measurements of the opacity of the IGM and CGM to hydrogen-ionizing photons. The chapter begins with a revised measurement of the HI column density distribution based on this new absorption line catalog that, due to the inclusion of high-order Lyman lines, provides the first statistically robust measurement of the frequency of absorbers with HI column densities 14 ≲ log(NHI/cm^-2) ≲ 17.2. Also presented are the first measurements of the column density distribution of HI within the CGM (50 < d < 300 pkpc) of high-z galaxies. These distributions are used to calculate the total opacity of the IGM and IGM+CGM and to revise previous measurements of the mean free path of hydrogen-ionizing photons within the IGM. This chapter also considers the effect of the surrounding CGM on the transmission of ionizing photons out of the sites of active star-formation and into the IGM.
This thesis concludes with a brief discussion of work in progress focused on understanding the distribution of metals within the CGM of KBSS galaxies. Appendix B discusses my contributions to the MOSFIRE instrumentation project.
Abstract:
Studies in turbulence often focus on two flow conditions, both of which occur frequently in real-world flows and are sought-after for their value in advancing turbulence theory. These are the high Reynolds number regime and the effect of wall surface roughness. In this dissertation, a Large-Eddy Simulation (LES) recreates both conditions over a wide range of Reynolds numbers Reτ = O(10^2)-O(10^8) and accounts for roughness by locally modeling the statistical effects of near-wall anisotropic fine scales in a thin layer immediately above the rough surface. A subgrid, roughness-corrected wall model is introduced to dynamically transmit this modeled information from the wall to the outer LES, which uses a stretched-vortex subgrid-scale model operating in the bulk of the flow. Of primary interest is the Reynolds number and roughness dependence of these flows in terms of first and second order statistics. The LES is first applied to a fully turbulent uniformly-smooth/rough channel flow to capture the flow dynamics over smooth, transitionally rough and fully rough regimes. Results include a Moody-like diagram for the wall-averaged friction factor, believed to be the first of its kind obtained from LES. Confirmation is found for experimentally observed logarithmic behavior in the normalized stream-wise turbulent intensities. Tight logarithmic collapse, scaled on the wall friction velocity, is found for smooth-wall flows when Reτ ≥ O(10^6) and in fully rough cases. Since the wall model operates locally and dynamically, the framework is used to investigate non-uniform roughness distribution cases in a channel, where the flow adjustments to sudden surface changes are investigated. Recovery of mean quantities and turbulent statistics after transitions are discussed qualitatively and quantitatively at various roughness and Reynolds number levels. The internal boundary layer, which is defined as the border between the flow affected by the new surface condition and the unaffected part, is computed, and a collapse of the profiles on a length scale containing the logarithm of the friction Reynolds number is presented. Finally, we turn to the possibility of expanding the present framework to accommodate more general geometries. As a first step, the whole LES framework is modified for use in the curvilinear geometry of a fully-developed turbulent pipe flow, with implementation carried out in a spectral element solver capable of handling complex wall profiles. The friction factors have shown favorable agreement with the superpipe data, and the LES estimates of the Kármán constant and additive constant of the log-law closely match values obtained from experiment.
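For context on the Moody-like friction-factor diagram mentioned above, the classical smooth-wall reference curve can be generated from the Prandtl-von Kármán friction law for smooth pipes, 1/sqrt(f) = 2.0 log10(Re sqrt(f)) - 0.8, solved by fixed-point iteration; this is a textbook correlation, not the rough-wall LES result of the thesis, and the Reynolds number used is illustrative.

    import math

    # Sketch: smooth-pipe Darcy friction factor from the classical Prandtl-von Karman law,
    # 1/sqrt(f) = 2.0*log10(Re*sqrt(f)) - 0.8, solved by fixed-point iteration.
    # This is the textbook smooth-wall curve underlying Moody-type diagrams,
    # not the rough-wall LES results described above.
    def smooth_friction_factor(Re, f0=0.02, iters=50):
        f = f0
        for _ in range(iters):
            f = (2.0 * math.log10(Re * math.sqrt(f)) - 0.8) ** -2
        return f

    print(smooth_friction_factor(1.0e5))   # ~0.018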
Abstract:
Curve samplers are sampling algorithms that proceed by viewing the domain as a vector space over a finite field, and randomly picking a low-degree curve in it as the sample. Curve samplers exhibit a nice property besides the sampling property: the restriction of low-degree polynomials over the domain to the sampled curve is still low-degree. This property is often used in combination with the sampling property and has found many applications, including PCP constructions, local decoding of codes, and algebraic PRG constructions.
The randomness complexity of curve samplers is a crucial parameter for their applications. It is known that (non-explicit) curve samplers using O(log N + log(1/δ)) random bits exist, where N is the domain size and δ is the confidence error. The question of explicitly constructing randomness-efficient curve samplers was first raised in [TU06], which obtained curve samplers with near-optimal randomness complexity.
In this thesis, we present an explicit construction of low-degree curve samplers with optimal randomness complexity (up to a constant factor) that sample curves of degree (m log_q(1/δ))^O(1) in F_q^m. Our construction is a delicate combination of several components, including extractor machinery, limited independence, iterated sampling, and list-recoverable codes.
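To make the sampled object concrete: a degree-t curve in F_q^m is a map C(i) = c_0 + c_1 i + ... + c_t i^t with coefficients c_j in F_q^m. The naive sampler sketched below simply picks the coefficients uniformly at random, costing about (t+1) m log q random bits; it illustrates the object being sampled, not the randomness-efficient construction of the thesis.

    import random

    # Sketch: a naive degree-t curve sampler over F_q^m (q prime for simplicity).
    # C(i) = c_0 + c_1*i + ... + c_t*i^t, with coefficients drawn uniformly from F_q^m.
    # This uses ~(t+1)*m*log2(q) random bits; the thesis constructs samplers that are
    # far more randomness-efficient, which this sketch does not attempt to reproduce.
    def random_curve(q, m, t):
        coeffs = [[random.randrange(q) for _ in range(m)] for _ in range(t + 1)]
        def curve(i):
            return [sum(c[k] * pow(i, j, q) for j, c in enumerate(coeffs)) % q
                    for k in range(m)]
        return curve

    C = random_curve(q=101, m=3, t=2)
    print(C(0), C(1), C(2))   # three points of a random degree-2 curve in F_101^3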
Abstract:
The dependence of the maximum and average energies of protons produced in the interaction of an intense laser pulse (~1 x 10^16 W cm^-2, 65 fs) with hydrogen clusters in a gas jet, backed up to 80 bar at liquid nitrogen temperature (~80 K), on the backing pressure has been studied. The general trend of the proton energy dependence on the square of the average cluster radius, determined by a calibrated Rayleigh scattering measurement, is similar to that described by theory under the single-size approximation. Calculations are made to fit the experimental results using a simplified model that takes into account both a log-normal cluster size distribution and the laser intensity attenuation in the interaction volume. Very good agreement between the experimental proton energy spectra and the calculations is obtained in the high-energy part of the proton energy distributions, but a discrepancy of the fits is revealed in the low-energy part at higher backing pressures, which are associated with denser flows. A possible mechanism that could be responsible for this discrepancy is discussed. Finally, the fits reveal that the cluster size distribution varies with the gas backing pressure as well as with the evolution time of the cluster gas flow.
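A minimal sketch of the kind of ensemble averaging described here, assuming only that the single-cluster proton energy scales as the square of the cluster radius and that radii follow a log-normal distribution; the distribution parameters are illustrative, and the laser intensity attenuation included in the full model is ignored.

    import numpy as np

    # Sketch: ensemble-averaged proton energy assuming E ~ r^2 per cluster and a
    # log-normal distribution of cluster radii. Parameters are illustrative only,
    # and the laser intensity attenuation included in the thesis's model is ignored.
    rng = np.random.default_rng(0)
    mu, sigma = np.log(5.0), 0.4          # log-normal parameters for radius (arbitrary units)
    r = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

    E_single = np.exp(2 * mu)             # energy of a cluster at the median radius (E ~ r^2, arb. units)
    E_mean = np.mean(r**2)                # ensemble average over the size distribution
    print(E_mean / E_single)              # ~exp(2*sigma^2) ~ 1.38: a broad distribution shifts the mean energy up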
Abstract:
The objective of this work is to determine the elapsed survival time, and the factors related to survival, from the diagnosis of AIDS in patients treated at the Evandro Chagas Hospital Research Center (CPqHEC), and to compare survival according to the AIDS case definition criteria established by the US Centers for Disease Control and Prevention (CDC) in 1987 and 1993 and by the Brazilian Ministry of Health in 1998. From a total of 1591 HIV-seropositive individuals registered between 1986 and 1999, a systematic random sample of 392 individuals was selected, in which 193 AIDS cases were identified by the CDC 1993 criterion. Survival was defined as the time elapsed from the date of AIDS diagnosis to death (failure); censoring was applied to patients lost to follow-up or still alive in December 2000, with the censoring date set to the date of the last visit. Survival was described with the Kaplan-Meier method, and the survival functions of the categories of each variable were compared with the log-rank test. A model with the covariates most relevant to survival was fitted using the Cox proportional hazards model. Of the 193 AIDS patients, 92 (47.7%) died, 21 (10.9%) abandoned treatment, and 80 (41.7%) remained alive until the end of the study. We found relatively high survival under all three case-definition criteria evaluated, partly explained by the fact that the cases came from a single research hospital with a high level of expertise in managing the disease. Primary prophylaxis for PCP was a predictor of better survival. Cases with AIDS defined by herpes zoster had the best prognosis, and those defined by two diseases the worst.
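For readers unfamiliar with the methods named here, the sketch below shows how a Cox proportional hazards model is typically fitted in Python with the lifelines package; the data frame and covariate names are hypothetical placeholders, not the CPqHEC data.

    import pandas as pd
    from lifelines import CoxPHFitter

    # Sketch of a Cox proportional hazards fit with lifelines.
    # The data and covariate names below are hypothetical placeholders, not the study data.
    df = pd.DataFrame({
        "survival_months": [6, 24, 36, 12, 48, 60, 18, 30],
        "death":           [1,  1,  0,  0,  1,  0,  1,  0],   # 1 = death, 0 = censored
        "pcp_prophylaxis": [0,  1,  1,  0,  1,  1,  0,  1],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="survival_months", event_col="death")
    cph.print_summary()   # hazard ratios for the covariates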
Abstract:
Most space applications require deployable structures due to the limiting size of current launch vehicles. Specifically, payloads in nanosatellites such as CubeSats require very high compaction ratios due to the very limited space available in this type of platform. Strain-energy-storing deployable structures can be suitable for these applications, but the curvature to which these structures can be folded is limited to the elastic range. Thanks to fiber microbuckling, high-strain composite materials can be folded to much higher curvatures without showing significant damage, which makes them suitable for very high compaction deployable structure applications. However, in applications that require carrying loads in compression, fiber microbuckling also dominates the strength of the material. A good understanding of the compressive strength of high-strain composites is therefore needed to determine how suitable they are for this type of application.
The goal of this thesis is to investigate, experimentally and numerically, the microbuckling in compression of high-strain composites. In particular, the behavior in compression of unidirectional carbon fiber reinforced silicone rods (CFRS) is studied. Experimental testing of the compression failure of CFRS rods showed a higher strength in compression than that estimated by analytical models, which is unusual in standard polymer composites. This effect, first discovered in the present research, was attributed to the random variation of the carbon fiber angles with respect to the nominal direction. This is an important effect, as it implies that the microbuckling strength might be increased by controlling the fiber angles. With a higher microbuckling strength, high-strain materials could carry loads in compression without reaching microbuckling and therefore be suitable for several space applications.
A finite element model was developed to predict the homogenized stiffness of the CFRS, and the homogenization results were used in another finite element model that simulated a homogenized rod under axial compression. A statistical representation of the fiber angles was implemented in the model. The presence of fiber angles increased the longitudinal shear stiffness of the material, resulting in a higher strength in compression. The simulations showed a large increase of the strength in compression for lower values of the standard deviation of the fiber angle, and a slight decrease of strength in compression for lower values of the mean fiber angle. The strength observed in the experiments was achieved with the minimum local angle standard deviation observed in the CFRS rods, whereas the shear stiffness measured in torsion tests was achieved with the overall fiber angle distribution observed in the CFRS rods.
High-strain composites exhibit good bending capabilities, but they tend to be soft out-of-plane. To achieve a higher out-of-plane stiffness, the concept of dual-matrix composites is introduced. Dual-matrix composites are foldable composites that are soft in the crease regions and stiff elsewhere. Previous attempts to fabricate continuous dual-matrix fiber composite shells had limited performance due to excessive resin flow and matrix mixing. An alternative method, presented in this thesis, uses UV-cure silicone and fiberglass to avoid these problems. Preliminary experiments on the effect of folding on the out-of-plane stiffness are presented. An application to a conical log-periodic antenna for CubeSats is proposed, using origami-inspired stowing schemes that allow a conical dual-matrix composite shell to reach very high compaction ratios.
Abstract:
Maternal exposure during pregnancy to a protein-restricted (LP) diet impairs the development of the endocrine pancreas in the offspring and increases susceptibility to hypertension, diabetes and obesity in adult life. There is evidence that this phenomenon can persist in subsequent generations. The objective was to evaluate the effect of protein restriction on glucose metabolism and pancreatic morphometry in the F3 offspring of mice at birth and at weaning. To this end, virgin female Swiss mice (F0) were mated and received either a normal-protein diet (19% protein, NP) or an isocaloric protein-restricted diet (5% protein, LP) throughout pregnancy. During lactation and for the remainder of the experiment, all groups received the NP diet. Male offspring were designated F1 (NP1 and LP1). F1 and F2 females were mated to produce F2 and F3 (NP2, LP2, NP3 and LP3), respectively. The pups were weighed weekly and the allometric growth rate was calculated (log[body mass] = log a + b log[age]). The animals were sacrificed at 1 and 21 days of age, blood glucose was determined, and the pancreas was removed, weighed and analyzed by stereology and immunofluorescence; insulin was measured at 21 days. As results, the restricted pups of the first generation (LP1) were smaller at birth but showed accelerated growth in the first seven days of life, catching up with controls; the LP2 offspring had the highest body mass at birth and a slower growth rate during lactation; there was no difference in body mass or growth rate in the F3 generation. Pancreas mass was decreased in LP1-LP3 at birth, but increased in LP2 at weaning. Islet volume density and diameter were lower in all restricted groups on days 1 and 21; only LP1 had a lower number of islets. At birth, beta-cell mass was lower in LP1-LP3 and remained low during lactation. On days 1 and 21 the pups were normoglycemic, yet they were hypoinsulinemic at weaning. Therefore, protein restriction in mice during pregnancy produces morphological changes in the pancreatic islets, suggesting that glucose homeostasis was maintained by an increase in insulin sensitivity during early life in the offspring across three consecutive generations.
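The allometric growth rate quoted above comes from a power-law fit, body mass = a * age^b, which is linear in log-log space; the sketch below shows such a fit on made-up illustrative data, not on the study's measurements.

    import numpy as np

    # Sketch: allometric growth fit, mass = a * age^b, i.e. log(mass) = log(a) + b*log(age).
    # The age/mass values below are made up for illustration, not data from the study.
    age_days = np.array([1, 3, 7, 14, 21])
    mass_g   = np.array([1.6, 2.4, 4.1, 7.0, 10.5])

    b, log_a = np.polyfit(np.log(age_days), np.log(mass_g), 1)
    print(np.exp(log_a), b)   # prefactor a and allometric exponent b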
Abstract:
A decline in the abundance of blackback flounders, together with the withdrawal of vessels from this fishery, has resulted in a lowered catch in recent years compared to the peak period 1928 through 1931. Data obtained from U. S. Fish and Wildlife Service Hatchery catch records and from fishermen's log book records show a drop in abundance of 63 per cent from the early 1930's to the present in the Boothbay Harbor region and of 31 to 40 per cent in the area south of Cape Cod. Information on the early life history and distribution of young blackback flounders and the size and age composition and distribution of fish subject to the commercial and sport fisheries indicates that the young are the product of local spawning and that the sport and commercial fisheries draw on a resident stock of primarily adult fish.
Abstract:
Mean velocity profiles were measured in the 5” x 60” wind channel of the turbulence laboratory at GALCIT using a hot-wire anemometer. The repeatability of the results was established and the accuracy of the instrumentation estimated. Scatter in the experimental results is little, if any, beyond this limit, although some effects might be expected to arise from variations in atmospheric humidity, which was not accounted for in the present work; slight unsteadiness in flow conditions is also responsible for some scatter.
Irregularities in the response of a hot-wire in close proximity to a solid boundary at low speeds were observed, as has already been found by others.
It was checked that Kármán’s logarithmic law holds reasonably well over the main part of a fully developed turbulent flow, the equation u/u_τ = 6.0 + 6.25 log10(y u_τ/ν) being obtained; as has previously been the case, the experimental points do not quite form one straight line in the region where viscosity effects are small. The values of the constants in this law giving the best overall agreement were determined and compared with those obtained by others.
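As a quick numerical check of the form of this law, the sketch below evaluates the fitted profile for an assumed friction velocity and wall distance in air; the input values are illustrative, not measurements from the thesis.

    import math

    # Sketch: evaluate the fitted log law u/u_tau = 6.0 + 6.25*log10(y*u_tau/nu)
    # for assumed, illustrative values of u_tau, y and nu (air at room conditions).
    u_tau = 0.5          # friction velocity, m/s (assumed)
    nu = 1.5e-5          # kinematic viscosity of air, m^2/s
    y = 0.01             # distance from the wall, m (assumed)

    y_plus = y * u_tau / nu
    u = u_tau * (6.0 + 6.25 * math.log10(y_plus))
    print(y_plus, u)     # y+ ~ 333, u ~ 10.9 m/s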
The range of Reynolds numbers used (based on half-width of channel) was from 20,000 to 60,000.
Abstract:
Over the last few decades, research on the contamination of marine organisms by organochlorine compounds (OCs) has intensified, together with the use of some species as sentinels of environmental quality with respect to organic pollutants. Among these species, cetaceans stand out: animals that, among other characteristics, are long-lived, have a high lipid content in their tissues, and are top predators, and thus tend to accumulate high levels of OCs in their tissues. The present study aimed to determine the concentrations of OCs of industrial and agricultural origin (PCBs, HCB and DDTs) in hepatic tissue of eight different delphinid cetacean species from three distinct oceanic areas off the State of Rio de Janeiro: the coastal region, the continental shelf and the oceanic region. The determination was performed on a gas chromatograph (GC, Agilent 6890) coupled to a mass spectrometer (MS, Agilent 5973). The DDT (1263617272 ng g-1 lipid) and PCB (7648877288 ng g-1 lipid) values found here are among the highest ever reported for the taxon. In all areas, ΣPCB predominated, followed by ΣDDT and HCB, at levels that reflect the strongly industrial character of the region analyzed. Among the PCBs, the largest contribution comes from the hexabiphenyls, followed by the hepta- and pentabiphenyls, with congeners 153, 138 and 180 the principal ones in all areas. The p,p'-DDE/Σp,p'-DDT ratio was high in all regions (0.9), reflecting an old input of the pollutant into the area. Correlations were made between OC concentrations and biological parameters of the species, such as age, sex and total length. Placental transfer of OCs was analyzed in two female-fetus pairs of Sotalia guianensis, showing greater transfer of the compounds with lower log Kow. As expected, a significant difference in the contamination profile was found among the species of the different regions, related to proximity to the source, species-specific characteristics and the trophic arrangement of the species.
Abstract:
In four chapters, various aspects of the earthquake source are studied.
Chapter I
Surface displacements that followed the Parkfield, 1966, earthquakes were measured for two years with six small-scale geodetic networks straddling the fault trace. The logarithmic rate and the periodic nature of the creep displacement recorded on a strain meter made it possible to predict creep episodes on the San Andreas fault. Some individual earthquakes were related directly to surface displacement, while in general, slow creep and aftershock activity were found to occur independently. The Parkfield earthquake is interpreted as a buried dislocation.
Chapter II
The source parameters of earthquakes between magnitude 1 and 6 were studied using field observations, fault plane solutions, and surface wave and S-wave spectral analysis. The seismic moment, M0, was found to be related to the local magnitude, ML, by log M0 = 1.7 ML + 15.1. The source length vs. magnitude relation for the San Andreas system was found to be ML = 1.9 log L - 6.7. The surface wave envelope parameter AR gives the moment according to log M0 = log AR300 + 30.1, and the stress drop, τ, was found to be related to the magnitude by τ = 0.54 M - 2.58. The relation between the surface wave magnitude MS and ML is proposed to be MS = 1.7 ML - 4.1. It is proposed to estimate the relative stress level (and possibly the strength) of a source region from the amplitude ratio of high-frequency to low-frequency waves. An apparent stress map for Southern California is presented.
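As a quick numerical illustration of two of the scaling relations quoted in this chapter, the sketch below evaluates the moment-magnitude and MS-ML relations for an assumed local magnitude; the chosen magnitude is illustrative, and the moment is left in the (unstated here) units used in the thesis.

    # Sketch: evaluate two of the empirical relations above for an assumed local magnitude.
    # ML = 4.0 is an illustrative choice, not a value from the thesis.
    ML = 4.0
    log_M0 = 1.7 * ML + 15.1          # seismic moment (log10, in the units used in the thesis)
    MS = 1.7 * ML - 4.1               # corresponding surface wave magnitude
    print(10 ** log_M0, MS)           # M0 ~ 7.9e21, MS = 2.7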
Chapter III
Seismic triggering and seismic shaking are proposed as two closely related mechanisms of strain release which explain observations of the character of the P wave generated by the Alaskan earthquake of 1964, and distant fault slippage observed after the Borrego Mountain, California earthquake of 1968. The Alaska, 1964, earthquake is shown to be adequately described as a series of individual rupture events. The first of these events had a body wave magnitude of 6.6 and is considered to have initiated or triggered the whole sequence. The propagation velocity of the disturbance is estimated to be 3.5 km/sec. On the basis of circumstantial evidence it is proposed that the Borrego Mountain, 1968, earthquake caused release of tectonic strain along three active faults at distances of 45 to 75 km from the epicenter. It is suggested that this mechanism of strain release is best described as "seismic shaking."
Chapter IV
The changes of apparent stress with depth are studied in the South American deep seismic zone. For shallow earthquakes the apparent stress is 20 bars on the average, the same as for earthquakes in the Aleutians and on Oceanic Ridges. At depths between 50 and 150 km the apparent stresses are relatively high, approximately 380 bars, and around 600 km depth they are again near 20 bars. The seismic efficiency is estimated to be 0.1. This suggests that the true stress is obtained by multiplying the apparent stress by ten. The variation of apparent stress with depth is explained in terms of the hypothesis of ocean floor consumption.
Abstract:
The use of self-etching primers has been proposed as an alternative to reduce the number of clinical steps. The objective of this randomized controlled clinical study was to evaluate the performance of a self-etching system (Transbond Plus Self-Etching Primer, 3M Unitek - SEP) compared with a conventional multi-step orthodontic bonding system (Transbond XT, 3M Unitek - TBXT) over a period of 12 months. Twenty-eight patients took part in the study and were randomly allocated to the SEP or TBXT groups by block randomization. A total of 548 metal brackets (Micro-Arch, Alexander prescription, GAC International, Bohemia, NY) were bonded using Transbond XT adhesive paste (3M Unitek), with all products handled according to the manufacturers' recommendations. In all, 276 brackets were placed in the control group and 272 in the second group. Kaplan-Meier survival curves and the log-rank test (p < 0.05) were used to compare the percentage of bond failures for the two techniques. At the end of the period, thirty-two bond failures (debonded brackets) were recorded: 19 (6.98%) failures with the self-etching primer (SEP) and 13 (4.71%) with the conventional primer (TBXT). There was no significant difference in bracket survival between the two groups (log-rank test, p = 0.311). When the influence of patient sex, dental arch and tooth type (anterior or posterior) was analyzed, only tooth type was found to be significant: brackets on posterior teeth had a higher probability of bond failure than those bonded to anterior teeth (p = 0.013). The authors conclude that the self-etching primer can be used for direct bonding of orthodontic brackets without affecting their clinical survival.
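To illustrate the statistical comparison used here, the sketch below fits a Kaplan-Meier curve and runs a log-rank test with the lifelines package on hypothetical bracket-survival data; the numbers are placeholders, not the trial data.

    import pandas as pd
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    # Sketch of a Kaplan-Meier / log-rank comparison of two bonding systems.
    # The durations and failure indicators below are hypothetical placeholders.
    sep  = pd.DataFrame({"months": [3, 12, 12, 8, 12, 6], "failed": [1, 0, 0, 1, 0, 1]})
    tbxt = pd.DataFrame({"months": [12, 12, 10, 12, 12, 7], "failed": [0, 0, 1, 0, 0, 1]})

    kmf = KaplanMeierFitter()
    kmf.fit(sep["months"], sep["failed"], label="SEP")
    print(kmf.survival_function_)

    result = logrank_test(sep["months"], tbxt["months"], sep["failed"], tbxt["failed"])
    print(result.p_value)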
Abstract:
STEEL, the Caltech-created nonlinear large-displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models using this software was difficult. SteelConverter was created as a means to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, fixity, etc., into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. The productivity of the researcher, as well as the level of confidence in the model being analyzed, is greatly increased.
It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferred. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.
In order to increase confidence in the use of STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons was made between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two programs on every aspect of each analysis. However, they also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in a program more capable of conducting highly nonlinear analysis, Perform. These analyses again showed very strong agreement between the two programs in every aspect of each analysis through instability. However, due to some limitations in Perform, free vibration analyses for the three-story one-bay chevron brace frame, the two-bay chevron brace frame, and the twenty-story moment frame could not be conducted. With the current trend towards ultimate capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building’s behavior under these extreme load scenarios.
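Since damping is one of the basic properties extracted from the free-vibration analyses mentioned above, a common way to estimate it is the logarithmic-decrement method; the sketch below applies that method to two assumed successive displacement peaks, which are illustrative values rather than results from these models.

    import math

    # Sketch: damping ratio from free vibration via the logarithmic decrement,
    # delta = ln(x_n / x_{n+1}), zeta = delta / sqrt(4*pi^2 + delta^2).
    # The two successive peak amplitudes below are assumed illustrative values.
    x_n, x_np1 = 1.00, 0.85
    delta = math.log(x_n / x_np1)
    zeta = delta / math.sqrt(4 * math.pi**2 + delta**2)
    print(zeta)   # ~0.026, i.e. about 2.6% of critical damping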
Following this, a final study was done on Hall’s U20 structure [1], in which the structure was analyzed in all three programs and the results compared. The pushover curves from each program were compared and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, following the onset of inelastic behavior the analysis tool failed to converge. However, for the small number of time steps over which the ETABS analysis was converging, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in the material model resulted in a pushover curve that did not match that of STEEL exactly, particularly post-collapse. However, such problems could be alleviated by choosing a simpler material model.