994 results for Nature observation


Relevance: 20.00%

Abstract:

Supporting presentation slides from the Janet network end-to-end performance initiative.

Relevance: 20.00%

Abstract:

The bibliographic history of Sertum palmarum brasiliensium... is as follows: in 1900, the Federal Deputy for São Paulo, D. Augusto César Miranda Azevedo, submitted to the National Congress an amendment to the budget bill for 1901 authorizing the Executive "to have the Imprensa Nacional print the text and plates of the monograph on palms by the Brazilian botanist Dr. João Barbosa Rodrigues...". The authorization, however, could not be carried out because the Imprensa Nacional lacked the equipment needed to reproduce the plates. In 1901, the Senator for the State of Pará, Dr. Lauro Sodré, submitted an amendment to the budget bill for 1902, whose law established in article 6: "the Government is authorized to have the work Sertum palmarum... printed in Europe or in whichever country offers the greatest advantage, opening the necessary credit for this purpose, in agreement with the author". The credit (two hundred contos de réis) was finally opened by Decree No. 4,428 of 12 June 1902, signed by Campos Sales and countersigned by Sabino Barroso, Minister of Justice. The work was printed in Brussels under the direction of Barbosa Rodrigues himself, who stayed there nine months for that purpose. It is a first-rate piece of graphic work, as much for the color reproduction of the plates as for the text and binding.

Relevance: 20.00%

Abstract:

Many sources of information that discuss current problems of food security point to the importance of farmed fish as an ideal food source that can be grown by poor farmers (Asian Development Bank 2004). Furthermore, the development of improved strains of fish suitable for low-input aquaculture, such as Tilapia, has demonstrated the feasibility of an approach that combines “cutting edge science” with accessible technology as a means of improving the nutrition and livelihoods of both the urban poor and poor farmers in developing countries (Mair et al. 2002). However, the use of improved strains of fish as a means of reducing hunger and improving livelihoods has proved difficult to sustain, especially as a public good, when external (development) funding devoted to this area is minimal. In addition, the more complicated problem of delivering an entire aquaculture system, not just improved fish strains and the associated technology, can present difficulties and may go unrecognized (Sissel Rogne, as cited by Silje Rem 2002). Thus, the involvement of private partners has featured prominently in the strategy for transferring technology related to improved Tilapia strains to the public. Partnering with the private sector in delivery schemes to the poor should take into account both the public-goods aspect and the requirement that the traits selected for breeding “improved” strains meet the actual needs of the resource-poor farmer. Other dissemination approaches involving the public sector may require a large investment in capacity building. However, the use of public-sector institutions as delivery agents helps maintain the “public good” nature of the products.

Relevance: 20.00%

Abstract:

RRAs were carried out in two Small Tank Cascade (STC) systems of North West Province, Sri Lanka (less than 1,000 ha total watershed area). A total of 21 tanks and 7 villages were investigated, with primary emphasis on two upper-watershed communities. The two systems differ primarily in their resource base, namely rainfall, natural forests, and proximity to large-scale perennial irrigation resources. [PDF contains 86 pages]

Relevance: 20.00%

Abstract:

We show that a category of one-dimensional XY-type models may enable high-fidelity quantum state transmission regardless of the details of the coupling configuration. This observation leads to a fault-tolerant design for a state-transmission setup: the setup is fault-tolerant, with specified thresholds, against engineering failures of the coupling configuration, fabrication imperfections or defects, and even time-dependent noise. We propose an experimental implementation of the fault-tolerant scheme using hard-core bosons in one-dimensional optical lattices.
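The abstract does not spell out the Hamiltonian; for orientation only, the generic one-dimensional XY-type form that such transfer schemes are usually built on (with the site-dependent couplings J_i left arbitrary, since the point above is robustness to their configuration) can be written as

```latex
H \;=\; \sum_{i=1}^{N-1} \frac{J_i}{2}\left(\sigma^x_i \sigma^x_{i+1} + \sigma^y_i \sigma^y_{i+1}\right)
   \;=\; \sum_{i=1}^{N-1} J_i\left(\sigma^+_i \sigma^-_{i+1} + \sigma^-_i \sigma^+_{i+1}\right).
```

Under the usual hard-core-boson mapping (σ⁺ ↔ b†, σ⁻ ↔ b, with at most one boson per site) this is the hopping Hamiltonian of hard-core bosons, which is why a one-dimensional optical-lattice implementation is a natural candidate.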

Relevance: 20.00%

Abstract:

We measured noninvasively the step velocities of elementary two-dimensional (2D) islands on {110} faces of tetragonal lysozyme crystals, under various supersaturations, by laser confocal microscopy combined with differential interference contrast microscopy. We studied the correlation between the effects of protein impurities on the growth of elementary steps and their adsorption sites on the crystal surface, using three kinds of proteins: fluorescent-labeled lysozyme (F-lysozyme), covalently bonded dimers of lysozyme (dimer), and an 18 kDa polypeptide (18 kDa). All three protein impurities suppressed the advancement of the steps, but they exhibited different supersaturation dependences of the suppression of the step velocities. To clarify the cause of this difference, we observed in situ the adsorption sites of individual molecules of F-lysozyme and fluorescent-labeled dimer (F-dimer) on the crystal surface by single-molecule visualization. We found that F-lysozyme adsorbed preferentially on steps (i.e., kinks), whereas F-dimer adsorbed randomly on terraces. Taking into account the different adsorption sites of F-lysozyme and F-dimer, we could successfully explain the different effects of the impurities on the step velocities. These observations strongly suggest that the 18 kDa polypeptide also adsorbs randomly on terraces. Seikagaku lysozyme exhibited a complex effect that could not be explained by the two major impurities (dimer and 18 kDa) present in Seikagaku lysozyme alone, indicating that trace amounts of other impurities significantly affect the step advancement.

Relevance: 20.00%

Abstract:

In addition to providing vital ecological services, coastal areas of North Carolina provide prized areas for habitation, recreation, and commercial fisheries. However, from a management perspective, the coasts of North Carolina are highly variable and complex. In-water constituents such as nutrients, suspended sediments, and chlorophyll a concentration can vary significantly over a broad spectrum of time and space scales. Rapid growth and land-use change continue to exert pressure on coastal lands. Coastal environments are also very vulnerable to short-term (e.g., hurricanes) and long-term (e.g., sea-level rise) natural changes that can result in significant loss of life, economic loss, or changes in coastal ecosystem functioning. Hence, the dynamic nature, effects of human-induced change over time, and vulnerability of coastal areas make it difficult to effectively monitor and manage these important state and national resources using traditional data collection technologies such as discrete monitoring stations and field surveys. In general, these approaches provide only a sparse network of data over limited time and space scales and generally are expensive and labor-intensive. Products derived from spectral images obtained by remote sensing instruments provide a unique vantage point from which to examine the dynamic nature of coastal environments. A primary advantage of remote sensing is that the altitude of observation provides a large-scale synoptic view relative to traditional field measurements. Equally important, the use of remote sensing for a broad range of research and environmental applications is now common due to major advances in data availability, data transfer, and computer technologies. To facilitate the widespread use of remote sensing products in North Carolina, the UNC Coastal Studies Institute (UNC-CSI) is developing the capability to acquire, process, and analyze remotely sensed data from several remote sensing instruments. In particular, UNC-CSI is developing regional remote sensing algorithms to examine the mobilization, transport, transformation, and fate of materials between coupled terrestrial and coastal ocean systems. To illustrate this work, we present the basic principles of remote sensing of coastal waters in the context of deriving information that supports efficient and effective management of coastal resources. (PDF contains 4 pages)

Relevance: 20.00%

Abstract:

Faults can slip either aseismically or through episodic seismic ruptures, but we still do not understand the factors that determine the partitioning between these two modes of slip. This challenge can now be addressed thanks to the dense geodetic and seismological networks that have been deployed in various areas of active tectonics. The data from such networks, together with modern remote sensing techniques, make it possible to document the spatial and temporal variability of the slip mode and thus provide some insight. This is the approach taken in this study, which is focused on the Longitudinal Valley Fault (LVF) in Eastern Taiwan. This fault is particularly appropriate since its very fast slip rate (about 5 cm/yr) is accommodated by both seismic and aseismic slip. Deformation of anthropogenic features shows that aseismic creep accounts for a significant fraction of fault slip near the surface, but the fault also releases energy seismically, having produced five M_w>6.8 earthquakes in 1951 and 2003. Moreover, owing to the thrust component of slip, the fault zone is exhumed, which allows investigation of the deformation mechanisms. In order to constrain the factors that control the mode of slip, we apply a multidisciplinary approach that combines modeling of geodetic observations, structural analysis, and numerical simulation of the "seismic cycle". Analyzing a dense set of geodetic and seismological data across the Longitudinal Valley, including campaign-mode GPS, continuous GPS (cGPS), leveling, accelerometric, and InSAR data, we document the partitioning between seismic and aseismic slip on the fault. For the period 1992 to 2011, we find that about 80-90% of slip on the LVF in the 0-26 km seismogenic depth range is actually aseismic. The clay-rich Lichi Mélange is identified as the key factor promoting creep at shallow depth. Microstructural investigations show that deformation within the fault zone must have resulted from a combination of frictional sliding at grain boundaries, cataclasis, and pressure-solution creep. Numerical modeling of earthquake sequences has been performed to investigate whether the results of the kinematic inversion of geodetic and seismological data on the LVF can be reproduced. We first examine the different modeling strategies that have been developed to explore the role and relative importance of the various factors governing how slip accumulates on faults. We compare the results of quasi-dynamic and fully dynamic simulations, and we conclude that ignoring the transient, wave-mediated stress transfers would be inappropriate. We therefore carry out fully dynamic simulations and succeed in qualitatively reproducing the wide range of observations for the southern segment of the LVF. We conclude that the spatio-temporal evolution of fault slip on the Longitudinal Valley Fault over 1997-2011 is consistent, to first order, with the prediction of a simple model in which a velocity-weakening patch is embedded in a velocity-strengthening area.
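The abstract does not write out the friction law, but earthquake-sequence simulations of this kind are conventionally built on rate-and-state friction; as a reminder of the terminology in the last sentence, the standard (Dieterich-Ruina) form with the aging law reads

```latex
\tau \;=\; \sigma\!\left[\mu_0 + a\,\ln\frac{V}{V_0} + b\,\ln\frac{V_0\,\theta}{D_c}\right],
\qquad
\dot{\theta} \;=\; 1 - \frac{V\theta}{D_c},
```

so that at steady state μ_ss = μ_0 + (a-b) ln(V/V_0). Patches with a-b < 0 are velocity-weakening and can nucleate seismic slip, while regions with a-b > 0 are velocity-strengthening and tend to creep; this is the sense in which the creeping, Mélange-dominated portion of the LVF is contrasted with the seismogenic patch above.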

Relevance: 20.00%

Abstract:

Seismic reflection methods have been extensively used to probe the Earth's crust and suggest the nature of its formative processes. The analysis of multi-offset seismic reflection data extends the technique from a reconnaissance method to a powerful scientific tool that can be applied to test specific hypotheses. The treatment of reflections at multiple offsets becomes tractable if the assumptions of high-frequency rays are valid for the problem being considered. Their validity can be tested by applying the methods of analysis to full wave synthetics.

Three studies illustrate the application of these principles to investigations of the nature of the crust in southern California. A survey shot by the COCORP consortium in 1977 across the San Andreas fault near Parkfield revealed events in the record sections whose arrival time decreased with offset. The reflectors generating these events are imaged using a multi-offset three-dimensional Kirchhoff migration. Migrations of full wave acoustic synthetics having the same limitations in geometric coverage as the field survey demonstrate the utility of this back projection process for imaging. The migrated depth sections show the locations of the major physical boundaries of the San Andreas fault zone. The zone is bounded on the southwest by a near-vertical fault juxtaposing a Tertiary sedimentary section against uplifted crystalline rocks of the fault zone block. On the northeast, the fault zone is bounded by a fault dipping into the San Andreas, which includes slices of serpentinized ultramafics, intersecting it at 3 km depth. These interpretations can be made despite complications introduced by lateral heterogeneities.
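As a minimal illustration of the back-projection (diffraction-stack) idea behind Kirchhoff migration, the sketch below sums each recorded sample into every image point whose source-to-point-to-receiver traveltime matches the sample time. It assumes a constant velocity and 2-D geometry purely for brevity; the survey discussed above used a multi-offset 3-D implementation, and all names and signatures here are illustrative, not taken from the original work.

```python
import numpy as np

def kirchhoff_migrate(data, dt, src_x, rec_x, img_x, img_z, v):
    """Constant-velocity 2-D diffraction-stack (Kirchhoff) migration sketch.

    data   : (n_traces, n_samples) recorded amplitudes
    dt     : sample interval in seconds
    src_x  : source x-coordinate for each trace
    rec_x  : receiver x-coordinate for each trace
    img_x, img_z : 1-D arrays defining the image grid
    v      : assumed constant propagation velocity
    """
    n_traces, n_samples = data.shape
    image = np.zeros((len(img_z), len(img_x)))
    for itr in range(n_traces):
        sx, rx = src_x[itr], rec_x[itr]
        for iz, z in enumerate(img_z):
            for ix, x in enumerate(img_x):
                # two-way traveltime: source -> image point -> receiver
                t = (np.hypot(x - sx, z) + np.hypot(x - rx, z)) / v
                isamp = int(round(t / dt))
                if isamp < n_samples:
                    # back-project the sample onto the candidate scatterer
                    image[iz, ix] += data[itr, isamp]
    return image
```

Energy from a real reflector adds coherently across offsets at its true position and tends to cancel elsewhere, which is how the migrated depth sections localize the fault-zone boundaries described above.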

In 1985 the Calcrust consortium designed a survey in the eastern Mojave Desert to image structures in both the shallow and the deep crust. Preliminary field experiments showed that the major geophysical acquisition problem to be solved was the poor penetration of seismic energy through a low-velocity surface layer, whose effects could be mitigated through special acquisition and processing techniques. Data obtained from industry showed that good-quality records could be acquired in areas with a deeper, older sedimentary cover, prompting a redefinition of the geologic objectives. Long-offset stationary arrays were designed to provide reversed, wider-angle coverage of the deep crust over parts of the survey. The preliminary field tests, together with constant monitoring of data quality and parameter adjustment, allowed 108 km of excellent crustal data to be obtained.

This dataset, along with two others from the central and western Mojave, was used to constrain rock properties and the physical condition of the crust. The multi-offset analysis proceeded in two steps. First, an increase in reflection peak frequency with offset is indicative of a thinly layered reflector; the thickness and velocity contrast of the layering can be calculated from the spectral dispersion, to discriminate between structures resulting from broad-scale or local effects. Second, the amplitude effects at different offsets of P-P scattering from weak elastic heterogeneities indicate whether the signs of the changes in density, rigidity, and Lamé's parameter at the reflector agree or are opposed. The effects of reflection generation and propagation in a heterogeneous, anisotropic crust were contained by the design of the experiment and the simplicity of the observed amplitude and frequency trends. Multi-offset spectra and amplitude-trend stacks of the three Mojave Desert datasets suggest that the most reflective structures in the middle crust are strong Poisson's ratio (σ) contrasts, indicating porous zones or the juxtaposition of units of mutually distant origin. Heterogeneities in σ increase towards the top of a basal crustal zone at ~22 km depth. The transitions to the basal zone and to the mantle include increases in σ. The Moho itself includes ~400 m of layering having a velocity higher than that of the uppermost mantle. The Moho maintains the same configuration across the Mojave despite 5 km of crustal thinning near the Colorado River, indicating that Miocene extension there either thinned just the basal zone, or that the basal zone developed regionally after the extensional event.
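For reference (this relation is not stated in the abstract itself), the Poisson's ratio σ invoked above is tied to the P- and S-wave velocities through the standard expression

```latex
\sigma \;=\; \frac{V_P^2 - 2V_S^2}{2\left(V_P^2 - V_S^2\right)},
```

so a strong σ contrast at a reflector corresponds to a change in the V_P/V_S ratio, the kind of signature expected from porous zones or from juxtaposed units of distant origin.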

Relevance: 20.00%

Abstract:

This work is concerned with the derivation of optimal scaling laws, in the sense of matching lower and upper bounds on the energy, for a solid undergoing ductile fracture. The specific problem considered concerns a material sample in the form of an infinite slab of finite thickness subjected to prescribed opening displacements on its two surfaces. The solid is assumed to obey the deformation theory of plasticity and, in order to further simplify the analysis, we assume isotropic rigid-plastic deformations with zero plastic spin. When the hardening exponents are given values consistent with observation, the energy is found to exhibit sublinear growth. We regularize the energy through the addition of nonlocal energy terms of the strain-gradient plasticity type. This nonlocal regularization has the effect of introducing an intrinsic length scale into the energy. We also put forth a physical argument that identifies the intrinsic length and suggests a linear growth of the nonlocal energy. Under these assumptions, ductile fracture emerges as the net result of two competing effects: whereas the sublinear growth of the local energy promotes localization of deformation to failure planes, the nonlocal regularization stabilizes this process, thus resulting in an orderly progression towards failure and a well-defined specific fracture energy. The optimal scaling laws derived here show that ductile fracture results from localization of deformations to void sheets, and that it requires a well-defined energy per unit fracture area. In particular, fractal modes of fracture are ruled out under the assumptions of the analysis. The optimal scaling laws additionally show that ductile fracture is cohesive in nature, i.e., it obeys a well-defined relation between tractions and opening displacements. Finally, the scaling laws supply a link between micromechanical properties and macroscopic fracture properties. In particular, they reveal the relative roles that surface energy and microplasticity play as contributors to the specific fracture energy of the material. Next, we present an experimental assessment of the optimal scaling laws. We show that when the specific fracture energy is renormalized in a manner suggested by the optimal scaling laws, the data fall within the bounds predicted by the analysis and, moreover, ostensibly collapse (with allowances made for experimental scatter) onto a master curve dependent on the hardening exponent but otherwise material independent.
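A schematic way to summarize the competition described above (an illustrative form consistent with the abstract, not necessarily the authors' exact functional): with a hardening exponent 0 < n < 1, the local plastic energy density grows sublinearly in the strain, while the strain-gradient regularization contributes a term that is linear in the strain gradient and carries the intrinsic length ℓ,

```latex
E(u) \;\sim\; \int_{\Omega} \Big[\, W\!\big(\epsilon(u)\big) \;+\; \sigma_0\,\ell\,\big|\nabla \epsilon(u)\big| \,\Big]\, dx,
\qquad W(\epsilon) \sim \sigma_0\,|\epsilon|^{\,n}, \quad 0 < n < 1 .
```

The sublinear local term favors concentrating deformation onto vanishingly thin planes, while the gradient term penalizes that concentration; matching lower and upper bounds for energies of this type is what produces the optimal scaling laws and the well-defined specific fracture energy referred to above.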

Relevance: 20.00%

Abstract:

The synthesis and direct observation of 1,1-di-tert-butyldiazene (16) at -127°C is described. The absorption spectrum of a red solution of 1,1-diazene 16 reveals a structured absorption band with λ_max at 506 nm (Me_2O, -125°C). The vibrational spacing in S_1 is about 1200 cm^(-1). The excited state of 16 emits weakly, with a single maximum at 715 nm observed in the fluorescence spectrum (Me_2O:CD_2Cl_2, -196°C). The proton NMR spectrum of 16 shows a singlet at 1.41 ppm. Monitoring this NMR absorption at -94 ± 2°C shows that 1,1-diazene 16 decomposes with a first-order rate constant of 1.8 x 10^(-3) sec^(-1) to form isobutane, isobutylene, and hexamethylethane. This rate is 10^8 and 10^(34) times faster than the thermal decomposition of the corresponding cis and trans 1,2-di-tert-butyldiazene isomers, respectively. The free energy of activation for decomposition of 1,1-diazene 16 is found to be 12.5 ± 0.2 kcal/mol at -94°C, which is much lower than the values of 19.1 and 19.4 kcal/mol calculated at -94°C for N-(2,2,6,6-tetramethylpiperidyl)nitrene (3) and N-(2,2,5,5-tetramethylpyrrolidyl)nitrene (4), respectively. This difference between 16 and the cyclic 1,1-diazenes 3 and 4 can be attributed to a large steric interaction between the tert-butyl groups in 1,1-diazene 16.
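As a consistency check (not part of the original text), the quoted activation free energy and the observed first-order rate are linked by the Eyring equation; at T ≈ 179 K (-94°C),

```latex
k \;=\; \frac{k_B T}{h}\, e^{-\Delta G^{\ddagger}/RT}
  \;\approx\; \left(3.7\times 10^{12}\ \mathrm{s^{-1}}\right)
  e^{-12.5/(0.00199\times 179)} \;\approx\; 2\times 10^{-3}\ \mathrm{s^{-1}},
```

in good agreement with the measured rate constant of 1.8 x 10^(-3) sec^(-1).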

In order to investigate the nature of the singlet-triplet gap in 1,1-diazenes, 2,5-di-tert-butyl-N-pyrrolynitrene (22) was generated but was found to be too reactive towards dimerization to be persistent. In the presence of dimethylsulfoxide, however, N-pyrrolynitrene (22) can be trapped as N-(2,5-di-tert-butyl-N'-pyrrolyl)dimethylsulfoximine (38). N-(2,5-di-tert-butyl-N'-pyrrolyl)dimethylsulfoximine (38-d_6) exchanges with free dimethylsulfoxide at 50°C in solution, presumably by generation and retrapping of pyrrolynitrene 22.

Relevance: 20.00%

Abstract:

The Madden-Julian Oscillation (MJO) is a pattern of intense rainfall and associated planetary-scale circulations in the tropical atmosphere, with a recurrence interval of 30-90 days. Although the MJO was first discovered 40 years ago, it is still a challenge to simulate the MJO in general circulation models (GCMs), and even with simple models it is difficult to agree on the basic mechanisms. This deficiency is mainly due to our poor understanding of moist convection—deep cumulus clouds and thunderstorms, which occur at scales that are smaller than the resolution elements of the GCMs. Moist convection is the most important mechanism for transporting energy from the ocean to the atmosphere. Success in simulating the MJO will improve our understanding of moist convection and thereby improve weather and climate forecasting.

We address this fundamental subject by analyzing observational datasets, constructing a hierarchy of numerical models, and developing theories. Parameters of the models are taken from observation, and the simulated MJO fits the data without further adjustments. The major findings include: 1) the MJO may be an ensemble of convection events linked together by small-scale high-frequency inertia-gravity waves; 2) the eastward propagation of the MJO is determined by the difference between the eastward and westward phase speeds of the waves; 3) the planetary scale of the MJO is the length over which temperature anomalies can be effectively smoothed by gravity waves; 4) the strength of the MJO increases with the typical strength of convection, which increases in a warming climate; 5) the horizontal scale of the MJO increases with the spatial frequency of convection; and 6) triggered convection, where potential energy accumulates until a threshold is reached, is important in simulating the MJO. Our findings challenge previous paradigms, which consider the MJO as a large-scale mode, and point to ways for improving the climate models.