899 results for INTERSECTION
Abstract:
The influence of contact angle and tube radius on the capillary-driven flow in circular cylindrical tubes is studied systematically by microgravity experiments using the drop tower. Experimental results show that the velocity of the capillary flow decreases monotonically with an increase in the contact angle. However, the time evolution of the velocity of the capillary flow differs for differently sized tubes. At the beginning of the microgravity period, the capillary flow in a thinner tube moves faster than that in a thicker tube, and then the latter overtakes the former. Therefore, there is an intersection between the curves of meniscus velocity vs microgravity time for two differently sized tubes. In addition, for two tubes of given sizes this intersection is delayed when the contact angle increases. The experimental results are analyzed theoretically and also supported by numerical computations.
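The crossover behavior described in this abstract can be illustrated with a minimal numerical sketch, assuming a simplified inertial Lucas-Washburn model in zero gravity; the fluid properties, tube radii, and the `meniscus_velocity` helper below are illustrative assumptions, not the paper's experimental values:

```python
import math

def meniscus_velocity(radius, theta_deg, t_end=0.5, dt=1e-5,
                      sigma=0.021, rho=760.0, mu=6.5e-4):
    """Meniscus velocity at time t_end from a simplified inertial
    Lucas-Washburn momentum balance in zero gravity:

        rho * (h*h'' + h'^2) = 2*sigma*cos(theta)/r - 8*mu*h*h'/r^2

    Integrated by explicit Euler; sigma, rho, mu are illustrative
    silicone-oil-like values (SI units).
    """
    theta = math.radians(theta_deg)
    h, v, t = 1e-4, 0.0, 0.0   # tiny initial column avoids division by zero
    while t < t_end:
        a = (2 * sigma * math.cos(theta) / radius
             - 8 * mu * h * v / radius**2
             - rho * v * v) / (rho * h)
        v += a * dt
        h += v * dt
        t += dt
    return v
```

With these numbers the thinner tube is faster at early times (inertia-limited regime, velocity near sqrt(2*sigma*cos(theta)/(rho*r))), while the thicker tube overtakes it later (viscous Washburn regime, velocity near sqrt(sigma*r*cos(theta)/(8*mu*t))), reproducing the intersection of the velocity curves; raising the contact angle lowers both curves.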
Abstract:
The aim of this paper is to propose a new solution for the roommate problem with strict preferences. We introduce the solution of maximum irreversibility and consider almost stable matchings (Abraham et al. [2]) and maximum stable matchings (Ta [30] [32]). We find that almost stable matchings are incompatible with the other two solutions. Hence, to solve the roommate problem we propose matchings that lie at the intersection of the maximum irreversible matchings and maximum stable matchings, which we call Q-stable matchings. These matchings are core consistent, and we offer an efficient algorithm for computing one of them. The outcome of the algorithm belongs to an absorbing set.
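Stability notions for the roommate problem all revolve around blocking pairs; an almost stable matching, for instance, minimizes their number. A minimal sketch of counting them (the `blocking_pairs` helper and the instance in the usage note are illustrative, not the paper's algorithm):

```python
from itertools import combinations

def blocking_pairs(prefs, matching):
    """Count blocking pairs of a matching in a roommate instance.

    prefs[a] is a's strict preference list (most preferred first);
    matching maps each agent to a partner or None. A pair (a, b) blocks
    if each strictly prefers the other to their current situation
    (being unmatched is treated as worst). A matching is stable iff
    this count is zero.
    """
    def rank(a, x):
        return prefs[a].index(x) if x in prefs[a] else len(prefs[a])

    def current_rank(a):
        p = matching.get(a)
        return rank(a, p) if p is not None else len(prefs[a])

    return sum(
        1 for a, b in combinations(prefs, 2)
        if rank(a, b) < current_rank(a) and rank(b, a) < current_rank(b)
    )
```

For the classic four-agent instance with no stable matching, `prefs = {1: [2, 3, 4], 2: [3, 1, 4], 3: [1, 2, 4], 4: [1, 2, 3]}`, every perfect matching admits at least one blocking pair, which is exactly what motivates relaxations such as almost stable and Q-stable matchings.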
Abstract:
Sea level rise (SLR) assessments are commonly used to identify the extent to which coastal populations are at risk of flooding. However, the data and assumptions used to develop these assessments contain numerous sources and types of uncertainty, which limit confidence in the accuracy of modeled results. This study illustrates how the intersection of uncertainty in digital elevation models (DEMs) and SLR leads to a wide range of modeled outcomes. SLR assessments are then reviewed to identify the extent to which uncertainty is documented in peer-reviewed articles. The paper concludes by discussing the priorities needed to further understand SLR impacts.
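One common way to expose the combined effect of DEM and SLR uncertainty is a Monte Carlo sweep over both error sources. The sketch below is a toy illustration under assumed Gaussian errors and a simple "bathtub" inundation rule; the `flooded_fraction` helper and all magnitudes are assumptions, not the study's method:

```python
import random

def flooded_fraction(elevations, slr_mean, slr_sd, dem_sd,
                     trials=2000, seed=1):
    """Monte Carlo estimate of the fraction of DEM cells inundated.

    Each trial perturbs the sea-level-rise value and every cell
    elevation (metres) by independent Gaussian errors, then counts
    cells below the perturbed sea level ("bathtub" rule).
    Returns (mean fraction, (5th percentile, 95th percentile)).
    """
    rng = random.Random(seed)
    fractions = []
    for _ in range(trials):
        slr = rng.gauss(slr_mean, slr_sd)
        wet = sum(1 for z in elevations if z + rng.gauss(0.0, dem_sd) < slr)
        fractions.append(wet / len(elevations))
    fractions.sort()
    mean = sum(fractions) / trials
    return mean, (fractions[int(0.05 * trials)], fractions[int(0.95 * trials)])
```

Even modest vertical errors spread the modeled outcome over a wide percentile band, which is the point the abstract makes about confidence in mapped inundation extents.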
Abstract:
Methodology for the preparation of allenes from propargylic hydrazine precursors under mild conditions is described. Oxidation of the propargylic hydrazines, which can be readily prepared from propargylic alcohols, with either of two azo oxidants, diethyl azodicarboxylate (DEAD) or 4-methyl-1,2,4-triazoline-3,5-dione (MTAD), effects conversion to the allenes, presumably via sigmatropic rearrangement of a monoalkyl diazene intermediate. This rearrangement is demonstrated to proceed with essentially complete stereospecificity. The application of this methodology to the preparation of other allenes, including two that are notable for their reactivity and thermal instability, is also described.
The structural and mechanistic study of a monoalkyl diazene intermediate in the oxidative transformation of propargylic hydrazines to allenes is described. The use of long-range heteronuclear NMR coupling constants for assigning monoalkyl diazene stereochemistry (E vs Z) is also discussed. Evidence is presented that all known monoalkyl diazenes are the E isomers, and the erroneous assignment of stereochemistry in the previous report of the preparation of (Z)-phenyldiazene is discussed.
The synthesis, characterization, and reactivity of 1,6-didehydro[10]annulene are described. This molecule has been recognized as an interesting synthetic target for over 40 years and represents the intersection of two sets of extensively studied molecules: nonbenzenoid aromatic compounds and molecules containing sterically compressed π-systems. The formation of 1,5-dehydronaphthalene from 1,6-didehydro[10]annulene is believed to be the prototype for cycloaromatizations that produce 1,4-dehydroaromatic species with the radical centers disposed anti about the newly formed single bond. The aromaticity of this annulene and the facility of its cycloaromatization are also analyzed.
Abstract:
The theories of relativity and quantum mechanics, the two most important physics discoveries of the 20th century, not only revolutionized our understanding of the nature of space-time and the way matter exists and interacts, but also became the building blocks of what we currently know as modern physics. My thesis studies both subjects in great depth; this intersection takes place in gravitational-wave physics.
Gravitational waves are "ripples of space-time", long predicted by general relativity. Although indirect evidence of gravitational waves has been discovered from observations of binary pulsars, direct detection of these waves is still actively being pursued. An international array of laser interferometer gravitational-wave detectors has been constructed in the past decade, and a first generation of these detectors has taken several years of data without a discovery. At this moment, these detectors are being upgraded into second-generation configurations, which will have ten times better sensitivity. Kilogram-scale test masses of these detectors, highly isolated from the environment, are probed continuously by photons. The sensitivity of such a quantum measurement can often be limited by the Heisenberg Uncertainty Principle, and during such a measurement, the test masses can be viewed as evolving through a sequence of nearly pure quantum states.
The first part of this thesis (Chapter 2) concerns how to minimize the adverse effect of thermal fluctuations on the sensitivity of advanced gravitational-wave detectors, thereby making them closer to being quantum-limited. My colleagues and I present a detailed analysis of coating thermal noise in advanced gravitational-wave detectors, which is the dominant noise source of Advanced LIGO in the middle of the detection frequency band. We identified the two elastic loss angles, clarified the different components of the coating Brownian noise, and obtained their cross spectral densities.
The second part of this thesis (Chapters 3-7) concerns formulating experimental concepts and analyzing experimental results that demonstrate the quantum mechanical behavior of macroscopic objects - as well as developing theoretical tools for analyzing quantum measurement processes. In Chapter 3, we study the open quantum dynamics of optomechanical experiments in which a single photon strongly influences the quantum state of a mechanical object. We also explain how to engineer the mechanical oscillator's quantum state by modifying the single photon's wave function.
In Chapters 4-5, we build theoretical tools for analyzing the so-called "non-Markovian" quantum measurement processes. Chapter 4 establishes a mathematical formalism that describes the evolution of a quantum system (the plant), which is coupled to a non-Markovian bath (i.e., one with a memory) while at the same time being under continuous quantum measurement (by the probe field). This aims at providing a general framework for analyzing a large class of non-Markovian measurement processes. Chapter 5 develops a way of characterizing the non-Markovianity of a bath (i.e., whether and to what extent the bath remembers information about the plant) by perturbing the plant and watching for changes in its subsequent evolution. Chapter 6 re-analyzes a recent measurement of a mechanical oscillator's zero-point fluctuations, revealing nontrivial correlation between the measurement device's sensing noise and the quantum back-action noise.
Chapter 7 describes a model in which gravity is classical and matter motions are quantized, elaborating how the quantum motions of matter are affected by the fact that gravity is classical. It offers an experimentally plausible way to test this model (hence the nature of gravity) by measuring the center-of-mass motion of a macroscopic object.
The most promising gravitational waves for direct detection are those emitted from highly energetic astrophysical processes, sometimes involving black holes - a type of object predicted by general relativity whose properties depend highly on the strong-field regime of the theory. Although black holes have been inferred to exist at centers of galaxies and in certain so-called X-ray binary objects, detecting gravitational waves emitted by systems containing black holes will offer a much more direct way of observing black holes, providing unprecedented details of space-time geometry in the black-holes' strong-field region.
The third part of this thesis (Chapters 8-11) studies black-hole physics in connection with gravitational-wave detection.
Chapter 8 applies black-hole perturbation theory to model the dynamics of a light compact object orbiting a massive central Schwarzschild black hole. In this chapter, we present a Hamiltonian formalism in which the low-mass object and the metric perturbations of the background spacetime are jointly evolved. Chapter 9 uses WKB techniques to analyze the oscillation modes (quasi-normal modes, or QNMs) of spinning black holes. We obtain analytical approximations to the spectrum of the weakly damped QNMs, with relative error O(1/L^2), and connect these frequencies to geometric features of spherical photon orbits in Kerr spacetime. Chapter 11 focuses mainly on near-extremal Kerr black holes; we discuss a bifurcation in their QNM spectra for certain ranges of (l,m) (the angular quantum numbers) as a/M → 1. With tools prepared in Chapters 9 and 10, in Chapter 11 we also obtain an analytical approximation to the scalar Green function in Kerr spacetime.
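The Chapter 9 correspondence between QNM frequencies and photon orbits can be illustrated in the non-spinning limit, where the light ring sits at r = 3M. The sketch below uses the standard eikonal (geometric-optics) formula, a textbook approximation rather than the thesis's O(1/L^2) result:

```python
import math

def schwarzschild_qnm_eikonal(l, n, M=1.0):
    """Eikonal (large-l) approximation to Schwarzschild quasi-normal modes.

    Real part: (l + 1/2) times the orbital angular frequency of the
    light ring at r = 3M. Imaginary part: (n + 1/2) times the Lyapunov
    exponent of the unstable photon orbit. Both rates equal
    1/(3*sqrt(3)*M) for Schwarzschild; G = c = 1 units, error O(1/l).
    """
    omega_c = 1.0 / (3.0 * math.sqrt(3.0) * M)  # light-ring frequency
    lam = 1.0 / (3.0 * math.sqrt(3.0) * M)      # Lyapunov exponent
    return complex((l + 0.5) * omega_c, -(n + 0.5) * lam)
```

For l = 2, n = 0 this gives roughly 0.481 - 0.096i in units of 1/M, against the exact 0.374 - 0.089i; the relative error shrinks as l grows, which is what makes the photon-orbit picture useful for the weakly damped part of the spectrum.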
Abstract:
The thesis analyzes the intimate relationship between pragmatism, or consequentialism, and the temporal modulation of the effects of judicial decisions. Within this relationship, it is worth highlighting the point of intersection that stands out on several occasions: the economic argument. This type of argument can take on special importance when examining the opportunity and convenience of eminently political decisions. In the judicial sphere, however, the pragmatic or consequentialist argument of an economic nature should not prevail as the basis for judicial decisions, especially in tax matters. The problems at the center of this study can be posed through the following questions: may the Supremo Tribunal Federal, when judging a given tax matter, take into account an argument such as the possible shortfall of X billion reais that a ruling against the tax authorities could cause to the public coffers? Is the reasoning of a judicial decision based exclusively or predominantly on such an argument legitimate or illegitimate? What weight may it carry in judicial decision-making? When it is applied, are there parameters to be followed? Which ones? We show that the prevalence of such an argument is inappropriate in the judicial sphere; that is, it should carry reduced or peripheral weight, serving to corroborate or reinforce the legal arguments at the center of the debate submitted to the Judiciary in general, and to the Supremo Tribunal Federal in particular. Seeking to clarify the main limits and possibilities of this argument, especially in relation to the temporal modulation of the effects of judicial decisions, we set out some rules necessary for its proper use, without which various principles and fundamental rights guaranteed by the Constitution would be inconceivably subverted.
In examining the questions submitted to the Supreme Court in tax matters, its guiding parameter is to give the greatest effectiveness and concreteness to the constitutional text. Temporal modulation of effects applies to a decision that, in declaring a normative act unconstitutional, would depart even further from the constitutional will if the traditional ex tunc effect (retroactive to the birth of the law) were applied. In these specific and exceptional situations, modulation is justified in order to give greater concreteness and effectiveness to the Constitution. The thesis proposed, in the end, consists of the set of rules set out in the work and of a legislative proposal.
Abstract:
This investigation demonstrates an application of a flexible-wall nozzle for testing in a supersonic wind tunnel. It is conservative to say that the versatility of this nozzle is such that it warrants the expenditure of time to carefully engineer a nozzle and incorporate it in the wind tunnel as a permanent part of the system. The gradients in the test section were kept within one percent of the calibrated Mach number, and the gradients occurring over the bodies tested were only ±0.2 percent in Mach number.
The conditions existing on a finite cone with a vertex angle of 75° were investigated by considering the pressure distribution on the cone and the shape of the shock wave. The pressure distribution on the surface of the 75° cone when based on upstream conditions does not show any discontinuities at the theoretical attachment Mach number.
Both the angle of the shock wave and the pressure distribution of the 75° cone are in very close agreement with the theoretical values given in the Kopal report, (Ref. 3).
The locations of the intersections of the sonic line with the surface of the cone and with the shock wave are given for the cone. The blocking characteristics of the GALCIT supersonic wind tunnel were investigated with a series of 60° cones.
Abstract:
Part 1 of this thesis is about the 24 November, 1987, Superstition Hills earthquakes. The Superstition Hills earthquakes occurred in the western Imperial Valley in southern California. The earthquakes took place on a conjugate fault system consisting of the northwest-striking right-lateral Superstition Hills fault and a previously unknown Elmore Ranch fault, a northeast-striking left-lateral structure defined by surface rupture and a lineation of hypocenters. The earthquake sequence consisted of foreshocks, the M_s 6.2 first main shock, and aftershocks on the Elmore Ranch fault followed by the M_s 6.6 second main shock and aftershocks on the Superstition Hills fault. There was dramatic surface rupture along the Superstition Hills fault in three segments: the northern segment, the southern segment, and the Wienert fault.
In Chapter 2, M_L≥4.0 earthquakes from 1945 to 1971 that have Caltech catalog locations near the 1987 sequence are relocated. It is found that none of the relocated earthquakes occur on the southern segment of the Superstition Hills fault and many occur at the intersection of the Superstition Hills and Elmore Ranch faults. Also, some other northeast-striking faults may have been active during that time.
Chapter 3 discusses the Superstition Hills earthquake sequence using data from the Caltech-U.S.G.S. southern California seismic array. The earthquakes are relocated and their distribution correlated to the type and arrangement of the basement rocks. The larger earthquakes occur only where continental crystalline basement rocks are present. The northern segment of the Superstition Hills fault has more aftershocks than the southern segment.
An inversion of long period teleseismic data of the second mainshock of the 1987 sequence, along the Superstition Hills fault, is done in Chapter 4. Most of the long period seismic energy seen teleseismically is radiated from the southern segment of the Superstition Hills fault. The fault dip is near vertical along the northern segment of the fault and steeply southwest dipping along the southern segment of the fault.
Chapter 5 is a field study of slip and afterslip measurements made along the Superstition Hills fault following the second mainshock. Slip and afterslip measurements were started only two hours after the earthquake. In some locations, afterslip more than doubled the coseismic slip. The northern and southern segments of the Superstition Hills fault differ in the proportion of coseismic and postseismic slip to the total slip.
The northern segment of the Superstition Hills fault had more aftershocks, more historic earthquakes, released less teleseismic energy, and had a smaller proportion of afterslip to total slip than the southern segment. The boundary between the two segments lies at a step in the basement that separates a deeper metasedimentary basement to the south from a shallower crystalline basement to the north.
Part 2 of the thesis deals with the three-dimensional velocity structure of southern California. In Chapter 7, an a priori three-dimensional crustal velocity model is constructed by partitioning southern California into geologic provinces, with each province having a consistent one-dimensional velocity structure. The one-dimensional velocity structures of each province were then assembled into a three-dimensional model. The three-dimensional model was calibrated by forward modeling of explosion travel times.
In Chapter 8, the three-dimensional velocity model is used to locate earthquakes. For about 1000 earthquakes relocated in the Los Angeles basin, the three-dimensional model has a variance of the travel time residuals 47 per cent less than the catalog locations found using a standard one-dimensional velocity model. Other than the 1987 Whittier earthquake sequence, little correspondence is seen between these earthquake locations and elements of a recent structural cross section of the Los Angeles basin. The Whittier sequence involved rupture of a north-dipping thrust fault bounded on at least one side by a strike-slip fault. The 1988 Pasadena earthquake was a deep left-lateral event on the Raymond fault. The 1989 Montebello earthquake was a thrust event on a structure similar to that on which the Whittier earthquake occurred. The 1989 Malibu earthquake was a thrust or oblique-slip event adjacent to the 1979 Malibu earthquake.
At least two of the largest recent thrust earthquakes (San Fernando and Whittier) in the Los Angeles basin have had the extent of their thrust plane ruptures limited by strike-slip faults. This suggests that the buried thrust faults underlying the Los Angeles basin are segmented by strike-slip faults.
Earthquake and explosion travel times are inverted for the three-dimensional velocity structure of southern California in Chapter 9. The inversion reduced the variance of the travel time residuals by 47 per cent compared to the starting model, a reparameterized version of the forward model of Chapter 7. The Los Angeles basin is well resolved, with seismically slow sediments atop a crust of granitic velocities. Moho depth is between 26 and 32 km.
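The 47 per cent figure quoted above is a variance reduction of travel-time residuals; for reference, that statistic can be sketched in a few lines (the residual values in the test are made up for illustration):

```python
def variance_reduction(residuals_old, residuals_new):
    """Percent reduction in travel-time residual variance between two
    velocity models.

    Variance is taken about zero, since travel-time residuals from an
    unbiased model should scatter about zero; this is then a reduction
    in mean squared residual. residuals_* are sequences of observed
    minus predicted travel times (seconds).
    """
    mso = sum(r * r for r in residuals_old) / len(residuals_old)
    msn = sum(r * r for r in residuals_new) / len(residuals_new)
    return 100.0 * (1.0 - msn / mso)
```

Halving every residual, for example, reduces the variance by 75 per cent, which is why variance reduction is a stringent summary of how much an inversion actually improves the fit.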
Abstract:
Fuzzy sets in the subject space are transformed to fuzzy solid sets in an increased object space on the basis of the development of the local umbra concept. Further, a counting transform is defined for reconstructing the fuzzy sets from the fuzzy solid sets, and the dilation and erosion operators in mathematical morphology are redefined in the fuzzy solid-set space. The algebraic structures of fuzzy solid sets can lead not only to fuzzy logic but also to arithmetic operations. Thus a fuzzy solid-set image algebra of two image transforms and five set operators is defined that can formulate binary and gray-scale morphological image-processing functions consisting of dilation, erosion, intersection, union, complement, addition, subtraction, and reflection in a unified form. A cellular set-logic array architecture is suggested for executing this image algebra. The optical implementation of the architecture, based on area coding of gray-scale values, is demonstrated. (C) 1995 Optical Society of America
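Gray-scale dilation and erosion with a flat structuring element, two of the operators unified by this image algebra, can be sketched in a few lines. This is a plain-Python illustration with border handling by restriction, not the paper's fuzzy solid-set formulation or its optical cellular-array implementation:

```python
def dilate(img, se):
    """Gray-scale dilation of a 2-D image (list of rows) by a flat
    structuring element given as (dy, dx) offsets: the maximum of the
    image over the reflected element at each pixel. Out-of-range
    samples are simply ignored.
    """
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = max(img[y - dy][x - dx] for dy, dx in se
                            if 0 <= y - dy < h and 0 <= x - dx < w)
    return out

def erode(img, se):
    """Gray-scale erosion: the minimum of the image over the
    structuring element at each pixel."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = min(img[y + dy][x + dx] for dy, dx in se
                            if 0 <= y + dy < h and 0 <= x + dx < w)
    return out
```

Erosion and dilation are dual (min versus max), and their compositions give opening and closing; the solid-set algebra in the abstract expresses these morphological operators alongside intersection, union, complement, addition, subtraction, and reflection in one unified form.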
Abstract:
The problem in this investigation was to determine the stress and deflection patterns of a thick cantilever plate at various angles of sweepback.
The plate was tested at angles of sweepback of zero, twenty, forty, and sixty degrees under uniform shear load at the tip, uniformly distributed load and torsional loading.
For all angles of sweep and for all types of loading, the area of critical stress is near the intersection of the root and the trailing edge. Stresses near the leading edge at the root decreased rapidly with increasing angle of sweep for all types of loading. In the outer portion of the plate near the trailing edge, the stresses due to the uniform shear and the uniformly distributed load did not vary for angles of sweep up to forty degrees. For the uniform shear and uniformly distributed loads at all angles of sweep, the area in which end effect is pronounced extends from the root to approximately three quarters of a chord length outboard of a line perpendicular to the axis of the plate through the trailing-edge root. In the case of the uniform shear and uniformly distributed loads, the deflections near the edge at seventy-five per cent semi-span decreased with increasing angle of sweep. Deflections near the trailing edge under the same loading conditions increased with angle of sweep for small angles and then decreased at the higher angles of sweep. The maximum deflection due to torsional loading increased with angle of sweep.
Abstract:
This study investigated the implementation of the National Policy of Permanent Health Education (PNEPS) in the State of Rio de Janeiro during 2006. The PNEPS fundamentally aims to change health practices through permanent in-service education, by problematizing the daily routine of health work. Following the decentralization advocated both by the Unified Health System (SUS) and by the PNEPS, the territory chosen for the investigation was the municipality of Teresópolis, presented according to the parameters used to calculate the Human Development Index. The research focused on the Municipal Health Secretariat and on the consensus rounds of the Permanent Health Education Pole of the mountain region (Região Serrana) of the State of Rio de Janeiro. The methodology triangulated data from participant observation, consultation of relevant documentary sources, and ten semi-structured interviews conducted with managers of the Municipal Health Secretariat. The narrative material from the interviews was transcribed and submitted to discourse analysis. The fieldwork yielded unexpected data for the analysis of the implementation of the PNEPS. Both the municipal government and UNIFESO share the same political leadership, with repercussions for the management of the Unified Health System and for formal health education. Although the Family Health Program generates demands for the PNEPS, the overlap and overload of care and educational activities, mediated in this setting by the same health professional, who is also a preceptor at UNIFESO, have repercussions for both initiatives. Above all, they hinder educational proposals addressed to the demands of workers and users. Finally, with regard to public policies, the study demonstrated the presence of the center-periphery model at the municipal scale, similar to that found at the global and federal levels, expressed in the decentralization of actions with the centralization of resources.
Abstract:
This work sought to analyze the place of psychoanalysis in new forms of therapeutic care in perinatology, more precisely in the domain covering the events that occur between conception and the child's 36th month of life. To this end, the field of maternal and child health in Brazil and the public policies that sustain it were first presented. Next, the functioning of the chosen field site, in this case a high-risk maternity hospital, was outlined. In view of the construction of a care network woven from different perspectives, the focus fell on the impasses at the intersection of the biomedical discourse, the health-education discourse, and that of psychoanalysis. At this point, D. W. Winnicott's contribution on the theory of personal maturation was used as the main reference. In order to situate the growing interest in early childhood, the psychoanalytic study of the beginnings of the psyche was mapped, after a brief incursion into the Freudian text. A discussion was also conducted on the encounter between psychoanalytic hypotheses and new scientific discoveries about the baby's potentialities, highlighting the possible consequences of this exchange. Finally, some conceptions grounding the importance of the psychoanalytic perspective for comprehensive maternal and child health care were highlighted, with emphasis on authors such as Lebovici, Cramer, Bydlowski, and Golse. Here the theoretical discussions are interwoven with field observations and clinical vignettes.
Abstract:
This dissertation studies the role of the subject in literature and its relation to culture and otherness through the analysis of two works: Nove noites, by Bernardo de Carvalho, and Heart of Darkness, by Joseph Conrad. The works studied show the crisis that strikes the protagonists of the two books after their encounter with other cultures. In Nove noites the other is represented by the Indian, and in Heart of Darkness by the Africans. In Nove noites the anthropologist Buell Quain commits suicide after a stay among the Krahô Indians, and in Heart of Darkness we see the deterioration of the white man represented by the character of Kurtz. Considered a remarkable man and an altruist in Europe, Kurtz is said to have been corrupted by contact with the reality of the Congo and becomes, in the words of the narrator Marlow, one of the devils of the land. The dissolution of the white man's personality and moral code, represented by the two characters, is studied by analyzing the relation between personality and culture and how the lack of group support and control, together with contact with the other, dismantles values until then considered stable. This disarticulation of the subject caused by culture shock adds to the general crisis of the modern subject and to civilization's discontents, as described by Freud. The paradoxical position of the anthropologist, situated between two cultures, is part of this analysis, as are questions concerning the position of the Indians and of the Africans in the Congo. In the specific case of Heart of Darkness, the dissertation explores the intersection between the analysis of the subject, and its implications, and the construction of the character of Kurtz as a symbol of colonial violence. The work also analyzes the similarities between the two books, both thematic and in their narrative techniques, and the influence of Conrad's work on Carvalho's novels.
Abstract:
This dissertation seeks to contribute to the teaching of the mother tongue by promoting a linguistic-discursive analysis, organized into levels, of the texts that make up the corpus investigated. From a universe of 375 student texts, an analysis of 37 texts written by students of the Military Schools (Colégios Militares) of Rio de Janeiro, Fortaleza, Porto Alegre, and Campo Grande is proposed, in order to establish literacy levels for the writing of 6th-year elementary school students, based on a theoretical framework that draws on the intersection of Fairclough's three-dimensional conception of discourse and Foucault's discursive regularities. The methodological proposal of this study rests on two categories of analysis: a discursive category and a grammatical one. The first, subdivided into intratext and intertext, points to the construction of meaning and to the student's positioning as a subject. The second, subdivided into referential cohesion and sequential cohesion, indicates the importance of grammatical elements in sustaining and developing the text. In each of these categories, discursive regularities were identified which, intersected, portray the different levels of reading and writing proficiency present within the same group, resulting from different literacy levels, and also indicate to the teacher the linguistic aspects that need to be developed in the classroom.