905 results for Catalog cards
Abstract:
Dimensional analysis is a subject well worth studying and learning: it is an effective tool for exploring scientific laws and for solving problems in science and engineering. Proficiency in dimensional analysis should be part of the basic training of every scientific and technical worker.
The book comprises several parts: the basic concepts of dimensional analysis; applications of dimensional analysis to familiar mechanical phenomena; applications of dimensional analysis to certain classical problems in mechanics; and the many applications to explosion mechanics made by the research group of Zheng Zhemin over the past three to four decades.
Preface
Chapter 1 Introduction
1.1 Dimensional analysis is a powerful means and method for analyzing and studying problems
1.2 The measurement of physical quantities
1.3 Dimensions: dimensional and dimensionless quantities
1.4 Basic quantities and derived quantities
1.5 The simple pendulum
1.6 The essence of dimensional analysis
1.7 A brief history of dimensional analysis
Chapter 2 Fundamental Principles
2.1 The power-law representation of dimensions
2.2 The Π theorem
2.3 The choice of independent variables and basic quantities
2.4 Similarity laws
2.5 Points to note in applying the Π theorem
Chapter 3 Fluid Mechanics Problems
3.1 Typical flows
3.2 Similarity parameters in fluid mechanics problems
3.3 Other similarity parameters
3.4 The classification of fluid motions
Chapter 4 Solid Mechanics Problems
4.1 Stress analysis of elastic bodies and stability analysis of simple structures
4.2 Vibrations and waves in elastic bodies
4.3 Stress analysis of elastic-plastic bodies
4.4 Tensile fracture of solids
Chapter 5 Heat Conduction and Thermal Stresses in Solids
5.1 Heat conduction in solids
5.2 Thermal stresses in elastic bodies
Chapter 6 Fluid-Solid Coupling Problems
6.1 Water hammer
6.2 Elasticity and bearings
6.3 Flutter of airfoils
6.4 Gas-excited vibration in heat exchangers
Chapter 7 The Fluid-Elastic-Plastic Model
7.1 The fluid-elastic-plastic body model
7.2 Similarity parameters in problems on the explosive effects of chemical explosives
7.3 Similarity parameters in high-velocity impact problems
Chapter 8 Explosion Similarity Laws
8.1 Blast waves in air and underwater explosion waves
8.2 Explosive forming
8.3 Blasting
Chapter 9 Impact Similarity Laws
9.1 Long-rod armor-piercing projectiles
9.2 Armor penetration: the formation of shaped-charge jets and their penetration of armor
9.3 Armor shattering by spallation
9.4 Hypervelocity impact
9.5 High-velocity expansion fracture of metal jets and thin plates
9.6 Coal and gas outbursts: a dynamical phenomenon in two-phase coupled media
Chapter 10 Normalization in Mathematical Modeling
References
Subject index
Index of foreign names
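The contents above single out the simple pendulum (Section 1.5) and the Π theorem (Section 2.2); as an illustration of the method, the pendulum's period law follows from dimensional bookkeeping alone. A minimal sketch, assuming the period depends only on the pendulum length l and the gravitational acceleration g (the standard textbook treatment, not necessarily this book's worked example):

```python
from fractions import Fraction

# Dimensional exponents over the basis (length, time):
# pendulum length l has dimension [L], gravity g has dimension [L T^-2].
# We seek exponents a, b with [l]^a [g]^b = [T] (the period). Matching
# exponents of each base dimension gives a linear system:
#   length:  a + b = 0
#   time:       -2b = 1
b = Fraction(-1, 2)   # from the time equation
a = -b                # from the length equation, a = 1/2
print(f"T ~ l^({a}) * g^({b})")  # i.e. T ~ sqrt(l/g), matching T = 2*pi*sqrt(l/g)
```

The Π theorem guarantees the dimensionless group T·sqrt(g/l) is a pure constant (2π for small oscillations), which dimensional analysis alone cannot determine.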
Abstract:
Assateague Island is an offshore bar forming the southeastern coast of Maryland and the northeastern coast of Virginia. It is part of the system of discontinuous barrier reefs, or bars, that occupies most of the Atlantic shoreline from Florida to Massachusetts. These bars are unstable, continuously influenced by storm winds and tides, which provide a distinct and rigorous habitat for the vegetation there. General floras of the Delmarva Peninsula do not mention Assateague Island specifically. The objective is to prepare a catalog of the vascular plants of Assateague Island and to describe the communities in which they are found, in the hope that it will add to the knowledge of barrier-reef vegetation.
Abstract:
[EN] This article investigates the question of the licensing of null arguments in the so-called pro-drop languages. By focusing on the licensing of null subjects in the different types of -T(Z)E nominalizations in Basque, it aims to define precisely the crucial feature that makes pro-drop possible in a clause. The central claim is that what licenses subject-drop is the assignment of structural Case: a subject can be null if and only if it is assigned structural Case. Different aspects of -T(Z)E nominalizations are also explored, which show that even if these clauses are similar on the surface, they can be syntactically very different, and furthermore that infinitive clauses marked with the same nominalizing morpheme can also have diverging structures.
Abstract:
Reference: Library of Congress - Online Catalog.
Abstract:
Translated into the vernacular and offered to the General, Constituent, and Legislative Assembly of the Empire of Brazil, by R.P.B.
Abstract:
[ES] The data in this record come from an academic activity that is also described in the repository and from which other works related to the Monastery can be accessed:
The intergalactic and circumgalactic medium surrounding star-forming galaxies at redshifts 2 < z < 3
Abstract:
We present measurements of the spatial distribution, kinematics, and physical properties of gas in the circumgalactic medium (CGM) of 2.0<z<2.8 UV color-selected galaxies as well as within the 2<z<3 intergalactic medium (IGM). These measurements are derived from Voigt profile decomposition of the full Lyα and Lyβ forest in 15 high-resolution, high signal-to-noise ratio QSO spectra resulting in a catalog of ∼6000 HI absorbers.
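The Voigt profile decomposition mentioned above models each absorber as the convolution of a Gaussian (thermal and turbulent broadening) and a Lorentzian (natural broadening). A minimal sketch via the Faddeeva function in SciPy; the function and parameter names here are illustrative, not the thesis pipeline's:

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z)

def voigt_profile(x, sigma, gamma):
    """Voigt profile: Gaussian (std dev sigma) convolved with Lorentzian (HWHM gamma).

    Normalized so it integrates to 1 over x.
    """
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

# With gamma = 0 the profile reduces to a pure Gaussian, so the peak value
# at x = 0 with sigma = 1 is 1/sqrt(2*pi).
peak = voigt_profile(0.0, 1.0, 0.0)
```

In a real fit, sigma encodes the Doppler b-parameter and gamma the transition's damping constant; a profile fitter adjusts column density, redshift, and b for each of the ∼6000 absorbers.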
Chapter 2 of this thesis focuses on HI surrounding high-z star-forming galaxies drawn from the Keck Baryonic Structure Survey (KBSS). The KBSS is a unique spectroscopic survey of the distant universe designed to explore the details of the connection between galaxies and intergalactic baryons within the same survey volumes. The KBSS combines high-quality background QSO spectroscopy with large densely-sampled galaxy redshift surveys to probe the CGM at scales of ∼50 kpc to a few Mpc. Based on these data, Chapter 2 presents the first quantitative measurements of the distribution, column density, kinematics, and absorber line widths of neutral hydrogen surrounding high-z star-forming galaxies.
Chapter 3 focuses on the thermal properties of the diffuse IGM. This analysis relies on measurements of the ∼6000 absorber line widths to constrain the thermal and turbulent velocities of absorbing "clouds." A positive correlation between the column density of HI and the minimum line width is recovered and implies a temperature-density relation within the low-density IGM for which higher-density regions are hotter, as is predicted by simple theoretical arguments.
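The separation of thermal from turbulent velocities described above rests on two standard relations: b_thermal = sqrt(2kT/m), and total line width adding in quadrature, b_total² = b_thermal² + b_turb². A minimal numerical sketch (the constants are standard values; the function names are mine, not the thesis's):

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
M_H = 1.6735575e-27   # hydrogen atom mass, kg

def b_thermal_kms(T):
    """Thermal Doppler parameter of hydrogen, in km/s, for gas temperature T in K."""
    return math.sqrt(2.0 * K_B * T / M_H) / 1e3

def b_total_kms(T, b_turb_kms):
    """Total line width: thermal and turbulent broadening add in quadrature."""
    return math.sqrt(b_thermal_kms(T) ** 2 + b_turb_kms ** 2)

# Gas at 10^4 K has b_thermal of roughly 12.8 km/s; adding 5 km/s of
# turbulence widens the line only modestly, which is why the minimum
# (narrowest) line widths at a given column density trace temperature.
print(b_thermal_kms(1e4), b_total_kms(1e4, 5.0))
```

This is why a lower envelope in the b vs. N_HI plane constrains the temperature-density relation: the narrowest lines are those with negligible turbulent contribution.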
Chapter 4 presents new measurements of the opacity of the IGM and CGM to hydrogen-ionizing photons. The chapter begins with a revised measurement of the HI column density distribution based on this new absorption line catalog that, due to the inclusion of high-order Lyman lines, provides the first statistically robust measurement of the frequency of absorbers with HI column densities 14 ≲ log(N_HI/cm^-2) ≲ 17.2. Also presented are the first measurements of the column density distribution of HI within the CGM (50 < d < 300 pkpc) of high-z galaxies. These distributions are used to calculate the total opacity of the IGM and IGM+CGM and to revise previous measurements of the mean free path of hydrogen-ionizing photons within the IGM. This chapter also considers the effect of the surrounding CGM on the transmission of ionizing photons out of the sites of active star-formation and into the IGM.
This thesis concludes with a brief discussion of work in progress focused on understanding the distribution of metals within the CGM of KBSS galaxies. Appendix B discusses my contributions to the MOSFIRE instrumentation project.
Abstract:
Cdc48/p97 is an essential, highly abundant hexameric member of the AAA (ATPase associated with various cellular activities) family. It has been linked to a variety of processes throughout the cell but it is best known for its role in the ubiquitin proteasome pathway. In this system it is believed that Cdc48 behaves as a segregase, transducing the chemical energy of ATP hydrolysis into mechanical force to separate ubiquitin-conjugated proteins from their tightly-bound partners.
Current models posit that Cdc48 is linked to its substrates through a variety of adaptor proteins, including a family of seven proteins (13 in humans) that contain a Cdc48-binding UBX domain. As such, due to the complexity of the network of adaptor proteins for which it serves as the hub, Cdc48/p97 has the potential to exert a profound influence on the ubiquitin proteasome pathway. However, the number of known substrates of Cdc48/p97 remains relatively small, and smaller still is the number of substrates that have been linked to a specific UBX domain protein. As such, the goal of this dissertation research has been to discover new substrates and better understand the functions of the Cdc48 network. With this objective in mind, we established a proteomic screen to assemble a catalog of candidate substrate/targets of the Ubx adaptor system.
Here we describe the implementation and optimization of a cutting-edge quantitative mass spectrometry method to measure relative changes in the Saccharomyces cerevisiae proteome. Utilizing this technology, and in order to better understand the breadth of function of Cdc48 and its adaptors, we then performed a global screen to identify accumulating ubiquitin conjugates in cdc48-3 and ubxΔ mutants. In this screen different ubx mutants exhibited reproducible patterns of conjugate accumulation that differed greatly from each other, pointing to various unexpected functional specializations of the individual Ubx proteins.
As validation of our mass spectrometry findings, we then examined in detail the endoplasmic-reticulum-bound transcription factor Spt23, which we identified as a putative Ubx2 substrate. In these studies ubx2Δ cells were deficient in processing of Spt23 to its active p90 form and in localizing p90 to the nucleus. Additionally, consistent with reduced processing of Spt23, ubx2Δ cells demonstrated a defect in expression of the Spt23 target gene OLE1, a fatty acid desaturase. Overall, this work demonstrates the power of proteomics as a tool to identify new targets of various pathways and reveals Ubx2 as a key regulator of lipid membrane biosynthesis.
Abstract:
Galaxy clusters are the largest gravitationally bound objects in the observable universe, and they are formed from the largest perturbations of the primordial matter power spectrum. During initial cluster collapse, matter is accelerated to supersonic velocities, and the baryonic component is heated as it passes through accretion shocks. This process stabilizes when the pressure of the bound matter prevents further gravitational collapse. Galaxy clusters are useful cosmological probes, because their formation progressively freezes out at the epoch when dark energy begins to dominate the expansion and energy density of the universe. A diverse set of observables, from radio through X-ray wavelengths, are sourced from galaxy clusters, and this is useful for self-calibration. The distributions of these observables trace a cluster's dark matter halo, which represents more than 80% of the cluster's gravitational potential. One such observable is the Sunyaev-Zel'dovich effect (SZE), which results when the ionized intracluster medium blueshifts the cosmic microwave background via inverse Compton scattering. Great technical advances in the last several decades have made regular observation of the SZE possible. Resolved SZE science, such as is explored in this analysis, has benefited from the construction of large-format camera arrays consisting of highly sensitive millimeter-wave detectors, such as Bolocam. Bolocam is a millimeter-wave camera, sensitive to 140 GHz and 268 GHz radiation, located at one of the best observing sites in the world: the Caltech Submillimeter Observatory on Mauna Kea in Hawaii. Bolocam fielded 144 of the original spider-web NTD bolometers used in an entire generation of ground-based, balloon-borne, and satellite-borne millimeter-wave instrumentation. Over approximately six years, our group at Caltech has developed a mature galaxy cluster observational program with Bolocam. This thesis describes the construction of the instrument's full cluster catalog: BOXSZ.
Using this catalog, I have scaled the Bolocam SZE measurements with X-ray mass approximations in an effort to characterize the SZE signal as a viable mass probe for cosmology. This work has confirmed the SZE to be a low-scatter tracer of cluster mass. The analysis has also revealed how sensitive the SZE-mass scaling is to small biases in the adopted mass approximation. Future Bolocam analysis efforts are set on resolving these discrepancies by approximating cluster mass jointly with different observational probes.
Abstract:
Uncovering the demographics of extrasolar planets is crucial to understanding the processes of their formation and evolution. In this thesis, we present four studies that contribute to this end, three of which relate to NASA's Kepler mission, which has revolutionized the field of exoplanets in the last few years.
In the pre-Kepler study, we investigate a sample of exoplanet spin-orbit measurements---measurements of the inclination of a planet's orbit relative to the spin axis of its host star---to determine whether a dominant planet migration channel can be identified, and at what confidence. Applying methods of Bayesian model comparison to distinguish between the predictions of several different migration models, we find that the data strongly favor a two-mode migration scenario combining planet-planet scattering and disk migration over a single-mode Kozai migration scenario. While we test only the predictions of particular Kozai and scattering migration models in this work, these methods may be used to test the predictions of any other spin-orbit misaligning mechanism.
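Bayesian model comparison of the kind described above amounts, in its simplest form, to computing the ratio of marginal likelihoods (the Bayes factor) of competing models. A toy sketch with two fixed-parameter Gaussian "models" of spin-orbit misalignment; the data and models here are invented for illustration and are not the thesis's:

```python
import math

def log_gauss(x, mu, sigma):
    """Log of the normal probability density."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))

# Invented spin-orbit misalignment angles (degrees), for illustration only.
data = [3.0, 8.0, 5.0, 12.0, 6.0]

# Two simple hypotheses with fixed parameters: a narrow ("well aligned")
# and a broad misalignment distribution. With no free parameters to
# marginalize over, the Bayes factor reduces to a likelihood ratio.
log_like_narrow = sum(log_gauss(x, 0.0, 10.0) for x in data)
log_like_broad = sum(log_gauss(x, 0.0, 40.0) for x in data)

log_bayes_factor = log_like_narrow - log_like_broad
# A positive value means these toy data favor the narrow, well-aligned model.
```

Real migration models have free parameters, so each marginal likelihood requires integrating the likelihood over the model's prior; the comparison logic is otherwise the same.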
We then present two studies addressing astrophysical false positives in Kepler data. The Kepler mission has identified thousands of transiting planet candidates, and only relatively few have yet been dynamically confirmed as bona fide planets, with only a handful more even conceivably amenable to future dynamical confirmation. As a result, the ability to draw detailed conclusions about the diversity of exoplanet systems from Kepler detections relies critically on understanding the probability that any individual candidate might be a false positive. We show that a typical a priori false positive probability for a well-vetted Kepler candidate is only about 5-10%, enabling confidence in demographic studies that treat candidates as true planets. We also present a detailed procedure that can be used to securely and efficiently validate any individual transit candidate using detailed information of the signal's shape as well as follow-up observations, if available.
Finally, we calculate an empirical, non-parametric estimate of the shape of the radius distribution of small planets with periods less than 90 days orbiting cool (less than 4000K) dwarf stars in the Kepler catalog. This effort reveals several notable features of the distribution, in particular a maximum in the radius function around 1-1.25 Earth radii and a steep drop-off in the distribution larger than 2 Earth radii. Even more importantly, the methods presented in this work can be applied to a broader subsample of Kepler targets to understand how the radius function of planets changes across different types of host stars.
Abstract:
Part 1 of this thesis is about the 24 November 1987 Superstition Hills earthquakes, which occurred in the western Imperial Valley in southern California. The earthquakes took place on a conjugate fault system consisting of the northwest-striking right-lateral Superstition Hills fault and the previously unknown Elmore Ranch fault, a northeast-striking left-lateral structure defined by surface rupture and a lineation of hypocenters. The earthquake sequence consisted of foreshocks, the M_s 6.2 first main shock, and aftershocks on the Elmore Ranch fault, followed by the M_s 6.6 second main shock and aftershocks on the Superstition Hills fault. There was dramatic surface rupture along the Superstition Hills fault in three segments: the northern segment, the southern segment, and the Wienert fault.
In Chapter 2, M_L≥4.0 earthquakes from 1945 to 1971 that have Caltech catalog locations near the 1987 sequence are relocated. It is found that none of the relocated earthquakes occur on the southern segment of the Superstition Hills fault and many occur at the intersection of the Superstition Hills and Elmore Ranch faults. Also, some other northeast-striking faults may have been active during that time.
Chapter 3 discusses the Superstition Hills earthquake sequence using data from the Caltech-U.S.G.S. southern California seismic array. The earthquakes are relocated and their distribution correlated to the type and arrangement of the basement rocks. The larger earthquakes occur only where continental crystalline basement rocks are present. The northern segment of the Superstition Hills fault has more aftershocks than the southern segment.
An inversion of long period teleseismic data of the second mainshock of the 1987 sequence, along the Superstition Hills fault, is done in Chapter 4. Most of the long period seismic energy seen teleseismically is radiated from the southern segment of the Superstition Hills fault. The fault dip is near vertical along the northern segment of the fault and steeply southwest dipping along the southern segment of the fault.
Chapter 5 is a field study of slip and afterslip measurements made along the Superstition Hills fault following the second mainshock. Slip and afterslip measurements were started only two hours after the earthquake. In some locations, afterslip more than doubled the coseismic slip. The northern and southern segments of the Superstition Hills fault differ in the proportion of coseismic and postseismic slip to the total slip.
The northern segment of the Superstition Hills fault had more aftershocks, more historic earthquakes, released less teleseismic energy, and had a smaller proportion of afterslip to total slip than the southern segment. The boundary between the two segments lies at a step in the basement that separates a deeper metasedimentary basement to the south from a shallower crystalline basement to the north.
Part 2 of the thesis deals with the three-dimensional velocity structure of southern California. In Chapter 7, an a priori three-dimensional crustal velocity model is constructed by partitioning southern California into geologic provinces, with each province having a consistent one-dimensional velocity structure. The one-dimensional velocity structures of the provinces were then assembled into a three-dimensional model, which was calibrated by forward modeling of explosion travel times.
In Chapter 8, the three-dimensional velocity model is used to locate earthquakes. For about 1000 earthquakes relocated in the Los Angeles basin, the three-dimensional model reduces the variance of the travel time residuals by 47 per cent relative to the catalog locations found using a standard one-dimensional velocity model. Other than the 1987 Whittier earthquake sequence, little correspondence is seen between these earthquake locations and elements of a recent structural cross section of the Los Angeles basin. The Whittier sequence involved rupture of a north-dipping thrust fault bounded on at least one side by a strike-slip fault. The 1988 Pasadena earthquake was a deep left-lateral event on the Raymond fault. The 1989 Montebello earthquake was a thrust event on a structure similar to that on which the Whittier earthquake occurred. The 1989 Malibu earthquake was a thrust or oblique-slip event adjacent to the 1979 Malibu earthquake.
At least two of the largest recent thrust earthquakes (San Fernando and Whittier) in the Los Angeles basin have had the extent of their thrust plane ruptures limited by strike-slip faults. This suggests that the buried thrust faults underlying the Los Angeles basin are segmented by strike-slip faults.
Earthquake and explosion travel times are inverted for the three-dimensional velocity structure of southern California in Chapter 9. The inversion reduced the variance of the travel time residuals by 47 per cent compared to the starting model, a reparameterized version of the forward model of Chapter 7. The Los Angeles basin is well resolved, with seismically slow sediments atop a crust of granitic velocities. Moho depth is between 26 and 32 km.
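The 47 per cent figures quoted in these chapter summaries are reductions in the variance of travel-time residuals, a statistic that is simple to compute. A minimal sketch with invented residual vectors (not the thesis's data):

```python
def variance(xs):
    """Population variance of a list of residuals."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def variance_reduction_pct(old_residuals, new_residuals):
    """Percent reduction in residual variance when moving between velocity models."""
    return 100.0 * (1.0 - variance(new_residuals) / variance(old_residuals))

# Invented travel-time residuals (seconds): the refined model shrinks the scatter.
old = [0.8, -0.6, 0.5, -0.9, 0.7, -0.5]
new = [0.5, -0.4, 0.3, -0.6, 0.5, -0.3]
print(variance_reduction_pct(old, new))
```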
Abstract:
Multi-finger caging offers a rigorous and robust approach to robot grasping. This thesis provides several novel algorithms for caging polygons and polyhedra in two and three dimensions. Caging refers to a robotic grasp that does not necessarily immobilize an object, but prevents it from escaping to infinity. The first algorithm considers caging a polygon in two dimensions using two point fingers. The second algorithm extends the first to three dimensions. The third algorithm considers caging a convex polygon in two dimensions using three point fingers, and considers robustness of this cage to variations in the relative positions of the fingers.
This thesis describes an algorithm for finding all two-finger cage formations of planar polygonal objects based on a contact-space formulation. It shows that two-finger cages have several useful properties in contact space. First, the critical points of the cage representation in the hand’s configuration space appear as critical points of the inter-finger distance function in contact space. Second, these critical points can be graphically characterized directly on the object’s boundary. Third, contact space admits a natural rectangular decomposition such that all critical points lie on the rectangle boundaries, and the sublevel sets of contact space and free space are topologically equivalent. These properties lead to a caging graph that can be readily constructed in contact space. Starting from a desired immobilizing grasp of a polygonal object, the caging graph is searched for the minimal, intermediate, and maximal caging regions surrounding the immobilizing grasp. An example constructed from real-world data illustrates and validates the method.
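In the contact-space formulation above, each finger is indexed by its arc-length position along the object's boundary, so the inter-finger distance becomes a function of two scalar coordinates whose critical points the algorithm characterizes. A minimal sketch of evaluating that function on a polygon (the polygon and helper names are illustrative, not the thesis's code):

```python
import math

def point_at(polygon, s):
    """Point on the closed polygon's boundary at arc length s from vertex 0."""
    n = len(polygon)
    for i in range(n):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % n]
        edge_len = math.hypot(bx - ax, by - ay)
        if s <= edge_len:
            t = s / edge_len
            return (ax + t * (bx - ax), ay + t * (by - ay))
        s -= edge_len
    return polygon[0]  # s equals the full perimeter

def inter_finger_distance(polygon, s1, s2):
    """Euclidean distance between two point fingers at arc lengths s1 and s2."""
    (x1, y1), (x2, y2) = point_at(polygon, s1), point_at(polygon, s2)
    return math.hypot(x2 - x1, y2 - y1)

# Unit square, perimeter 4. Fingers at the midpoints of opposite edges
# are exactly one unit apart.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
d = inter_finger_distance(square, 0.5, 2.5)
```

Critical points of this two-variable function over the (s1, s2) contact space are what get classified and threaded into the caging graph described above.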
A second algorithm is developed for finding caging formations of a 3D polyhedron for two point fingers using a lower-dimensional contact-space formulation. Results from the two-dimensional algorithm are extended to three dimensions. Critical points of the inter-finger distance function are shown to be identical to the critical points of the cage. A decomposition of contact space into 4D regions having useful properties is demonstrated. A geometric analysis of the critical points of the inter-finger distance function results in a catalog of grasps in which the cages change topology, leading to a simple test to classify critical points. With these properties established, the search algorithm from the two-dimensional case may be applied to the three-dimensional problem. An implemented example demonstrates the method.
This thesis also presents a study of cages of convex polygonal objects using three point fingers. It considers a three-parameter model of the relative position of the fingers, which gives complete generality for three point fingers in the plane. It analyzes robustness of caging grasps to variations in the relative position of the fingers without breaking the cage. Using a simple decomposition of free space around the polygon, we present an algorithm which gives all caging placements of the fingers and a characterization of the robustness of these cages.
Abstract:
This publication gives the results of the bottom trawlings made during the cruises Togo 3 and Togo 4 by the oceanographic research vessel "André Nizery" on the continental shelf of Togo during the estimation program of halieutic resources. The report includes: 1 - The report of the cruises Togo 3 and Togo 4; 2 - Some information on the presentation of the results; 3 - The trawl recording cards for the 2 cruises; 4 - The length frequency distributions of the measured samples.
Abstract:
This publication gives the results of bottom trawlings made during the cruises Chalci 83.01 and Chalci 83.02 by the oceanographic research vessel "André Nizery" on the Ivorian continental shelf. The publication contains: an introduction explaining the form of the results; the trawl recording cards with the characteristics of the trawl and the detail of catches by species; and the length frequency distributions of the measured samples.
Abstract:
This publication gives the results of the bottom trawlings made during the cruises Togo 1 and Togo 2 by the oceanographic research vessel "André Nizery" on the continental shelf of Togo during the estimation program of halieutic resources. The report includes: 1 - The report of the cruises Togo 1 and Togo 2; 2 - Some information on the presentation of the results; 3 - The trawl recording cards for the 2 cruises; 4 - The length frequency distributions of the measured samples.