866 results for Limitation of Actions
Abstract:
This dissertation addresses access to high-complexity services, particularly diagnostic and complementary examinations, studied among users of private health plans who seek specialized care and diagnosis. Since the 1980s, users of the public health system have increasingly turned to supplementary health care. However, whether access is actually guaranteed in the private domain, through the contracting of health plans, is the uncertainty that inspired this research, which is justified by the relevance of actions that could improve the regulatory quality of health plans through social control by their users. The general objective is to analyze perceptions of access to high-complexity examinations in private health services among health plan users. The specific objectives are: to describe health plan users' perceptions of access to high-complexity examinations; to analyze the motivations of private health plan users for undergoing high-complexity examinations through the private care network; and to analyze health plan users' level of satisfaction with access to high-complexity examinations. The methodology is qualitative and descriptive; the sample comprised thirty health plan users over 18 years of age, selected in the field of study in 2010. The study setting was a private diagnostic medicine laboratory in Rio de Janeiro. The data collection techniques were a form and structured individual interviews. The form was analyzed through descriptive statistics, and the interviews through thematic-categorical content analysis. Health plan users stated that access to high-complexity examinations is easily guaranteed.
Their main motivations for undergoing these examinations in the private care network were characterized by: speed of service; flexibility and ease of scheduling via the internet, by telephone, or in person at the laboratory studied; prompt delivery of results; the difficulty and slowness of care in the SUS (the Brazilian public health system); the location of the accredited provider close to residential neighborhoods or the workplace; excellent diagnostic imaging resolution; and the possibility of choosing between open and closed modalities of magnetic resonance imaging and computed tomography, in addition to bone densitometry, all of which were easily accessible to all research subjects. The level of satisfaction matched the speed with which elective and urgent examinations were performed, which users reported as nearly equivalent on the time scale. However, although users rated their health plans highly, some difficulties were reported, such as: expiration dates on previously dated medical orders; authorization codes required by the plan operator; bureaucratic scheduling procedures; difficulties in accessing treatments such as implants, physical therapy, RPG (postural reeducation), Pilates, home care, and check-up consultations; denial of reimbursements; restrictions on surgical materials, especially prostheses and orthoses; and specific prescription-strength restrictions for myopia surgery. It is concluded that the fast turnaround of high-cost imaging examinations in the sample was described as satisfactory, although the perception of speed may vary with the type of private health plan product contracted, and that regulatory improvement is needed in some specific aspects of supplementary health care.
Abstract:
Many applications in cosmology and astrophysics at millimeter wavelengths including CMB polarization, studies of galaxy clusters using the Sunyaev-Zeldovich effect (SZE), and studies of star formation at high redshift and in our local universe and our galaxy, require large-format arrays of millimeter-wave detectors. Feedhorn and phased-array antenna architectures for receiving mm-wave light present numerous advantages for control of systematics, for simultaneous coverage of both polarizations and/or multiple spectral bands, and for preserving the coherent nature of the incoming light. This enables the application of many traditional "RF" structures such as hybrids, switches, and lumped-element or microstrip band-defining filters.
Simultaneously, kinetic inductance detectors (KIDs) using high-resistivity materials like titanium nitride are an attractive sensor option for large-format arrays because they are highly multiplexable and because they can have sensitivities reaching the condition of background-limited detection. A KID is an LC resonator whose inductance comprises the geometric and kinetic inductance of the inductor in the superconducting phase. A photon absorbed by the superconductor breaks a Cooper pair into normal-state electrons and perturbs the kinetic inductance, rendering the resonator a detector of light. The responsivity of a KID is given by the fractional frequency shift of the LC resonator per unit optical power.
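For readers unfamiliar with the resonator relation described above, the following minimal sketch illustrates how a small kinetic-inductance perturbation maps to a fractional frequency shift. The component values are arbitrary assumptions for illustration, not figures from the thesis:

```python
import math

# A KID is an LC resonator: f0 = 1 / (2*pi*sqrt(L*C)),
# where L = L_geometric + L_kinetic.
def resonance_frequency(L, C):
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

L_geo = 8e-9   # geometric inductance [H] (assumed value)
L_kin = 2e-9   # kinetic inductance [H] (assumed value)
C = 1e-12      # capacitance [F] (assumed value)

f0 = resonance_frequency(L_geo + L_kin, C)

# An absorbed photon breaks Cooper pairs and perturbs the kinetic
# inductance; model this as a small inductance change dL.
dL = 1e-12
f_shifted = resonance_frequency(L_geo + L_kin + dL, C)

# Responsivity-style figure of merit: fractional frequency shift,
# approximately -dL / (2 * L_total) for small perturbations.
df_over_f = (f_shifted - f0) / f0
print(f"f0 = {f0 / 1e9:.3f} GHz, df/f = {df_over_f:.3e}")
```

The shift is negative: increasing the inductance lowers the resonance frequency, and reading out this shift per unit optical power gives the responsivity quoted above.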
However, coupling these types of optical reception elements to KIDs is challenging because of the impedance mismatch between the microstrip transmission line exiting these architectures and the high resistivity of titanium nitride. Mitigating direct absorption of light through free-space coupling to the KID inductor is another challenge. We present a detailed titanium nitride KID design that addresses these challenges. The KID inductor is capacitively coupled to the microstrip in such a way as to form a lossy termination without creating an impedance mismatch. A parallel-plate capacitor design using hydrogenated amorphous silicon mitigates direct absorption and yields acceptable noise. We show that the optimized design can yield expected sensitivities very close to the fundamental limit for a long-wavelength imager (LWCam) that covers six spectral bands from 90 to 400 GHz for SZE studies.
Excess phase (frequency) noise has been observed in KIDs and is very likely caused by two-level systems (TLS) in dielectric materials. The TLS hypothesis is supported by the measured dependence of the noise on resonator internal power and temperature. However, there is still no unified microscopic theory that can quantitatively model the properties of TLS noise. In this thesis we derive the noise power spectral density due to the coupling of TLS with a phonon bath, based on an existing model, and compare the theoretical predictions of the power and temperature dependences with experimental data. We discuss the limitations of this model and propose directions for future study.
Abstract:
The purpose of this research was to critically analyze best practices in recruitment and selection directed at people with disabilities applying for jobs at five private companies in the city of Rio de Janeiro, each with more than 100 employees on staff and therefore required to comply with Law No. 8,213/1991, the Quota Law. The study also analyzed how Human Resources professionals acquired technical knowledge about these practices. Part of the objectives of this research was to identify the foundations of this legal requirement with respect to the technical support available to HR professionals in the selection process. In this respect, the focus of the investigation was to verify the existence of qualification programs for these professionals, given that the demand for training is always centered on the person with a disability when, in fact, the gap is also present among those responsible for dealing with this public at the time of their entry into corporate organizations. The methodological approach included field research based on data from semi-structured interviews, complemented by the verbal report analysis technique. Six HR professionals who work directly in the recruitment and selection of people with disabilities were chosen as research participants. Inevitably, these HR professionals use psychometric instruments, among other tools routinely employed in the selection process, including in the assessment of people with disabilities. The results of this research indicate that the best recruitment and selection practices currently in use for people with disabilities are discriminatory, because the professionals involved in the process, lacking knowledge of appropriate practices, apply the same procedures adopted for filling vacancies with so-called "normal" candidates.
In addition, the literature review points to the absence of technical and scientific support for qualifying the professionals responsible for the entry and retention of people with disabilities in the labor market, thus confirming the limitation of the public policy currently in force. For this reason, the adoption of affirmative actions is proposed, in this case by private organizations, to mobilize efforts toward hiring socially excluded groups in the labor market, such as people with disabilities.
Abstract:
Professionals who are responsible for coastal environmental and natural resource planning and management have a need to become conversant with new concepts designed to provide quantitative measures of the environmental benefits of natural resources. These amenities range from beaches to wetlands to clean water and other assets that normally are not bought and sold in everyday markets. At all levels of government — from federal agencies to townships and counties — decisionmakers are being asked to account for the costs and benefits of proposed actions. To non-specialists, the tools of professional economists are often poorly understood and sometimes inappropriate for the problem at hand. This handbook is intended to bridge this gap. The most widely used organizing tool for dealing with natural and environmental resource choices is benefit-cost analysis — it offers a convenient way to carefully identify and array, quantitatively if possible, the major costs, benefits, and consequences of a proposed policy or regulation. The major strength of benefit-cost analysis is not necessarily the predicted outcome, which depends upon assumptions and techniques, but the process itself, which forces an approach to decision-making that is based largely on rigorous and quantitative reasoning. However, a major shortfall of benefit-cost analysis has been the difficulty of quantifying both benefits and costs of actions that impact environmental assets not normally, nor even regularly, bought and sold in markets. Failure to account for these assets, to omit them from the benefit-cost equation, could seriously bias decisionmaking, often to the detriment of the environment. Economists and other social scientists have put a great deal of effort into addressing this shortcoming by developing techniques to quantify these non-market benefits. 
The major focus of this handbook is on introducing and illustrating concepts of environmental valuation, among them Travel Cost models and Contingent Valuation. These concepts, combined with advances in natural sciences that allow us to better understand how changes in the natural environment influence human behavior, aim to address some of the more serious shortcomings in the application of economic analysis to natural resource and environmental management and policy analysis. Because the handbook is intended for non-economists, it addresses basic concepts of economic value such as willingness-to-pay and other tools often used in decision making such as cost-effectiveness analysis, economic impact analysis, and sustainable development. A number of regionally oriented case studies are included to illustrate the practical application of these concepts and techniques.
Abstract:
Karenia brevis is the dominant toxic red tide algal species in the Gulf of Mexico. It produces potent neurotoxins (brevetoxins [PbTxs]), which negatively impact human and animal health, local economies, and ecosystem function. Field measurements have shown that cellular brevetoxin contents vary from 1–68 pg/cell, but the source of this variability is uncertain. Increases in cellular toxicity caused by nutrient limitation and inter-strain differences have been observed in many algal species. This study examined the effect of P-limitation of growth rate on cellular toxin concentrations in five Karenia brevis strains from different geographic locations. Phosphorus was selected because of evidence for regional P-limitation of algal growth in the Gulf of Mexico. Depending on the isolate, P-limited cells had 2.3- to 7.3-fold higher PbTx per cell than P-replete cells. The percent of cellular carbon associated with brevetoxins (%C-PbTx) was ~0.7 to 2.1% in P-replete cells, but increased to 1.6–5% under P-limitation. Because PbTxs are potent anti-grazing compounds, this increased investment in PbTxs should enhance cellular survival during periods of nutrient-limited growth. The %C-PbTx was inversely related to the specific growth rate in both the nutrient-replete and P-limited cultures of all strains. This inverse relationship is consistent with an evolutionary tradeoff between carbon investment in PbTxs and other grazing defenses, and C investment in growth and reproduction. In aquatic environments where nutrient supply and grazing pressure often vary on different temporal and spatial scales, this tradeoff would be selectively advantageous as it would result in increased net population growth rates. The variation in PbTx/cell values observed in this study can account for the range of values observed in the field, including the highest values, which are not observed under N-limitation.
These results suggest P-limitation is an important factor regulating cellular toxicity and adverse impacts during at least some K. brevis blooms.
Abstract:
Design knowledge can be acquired from various sources and generally requires an integrated representation for its effective and efficient re-use. Though knowledge about products and processes can illustrate the solutions created (know-what) and the courses of actions (know-how) involved in their creation, the reasoning process (know-why) underlying the solutions and actions is still needed for an integrated representation of design knowledge. Design rationale is an effective way of capturing that missing part, since it records the issues addressed, the options considered, and the arguments used when specific design solutions are created and evaluated. Apart from the need for an integrated representation, effective retrieval methods are also of great importance for the re-use of design knowledge, as the knowledge involved in designing complex products can be huge. Developing methods for the retrieval of design rationale is very useful as part of the effective management of design knowledge, for the following reasons. Firstly, design engineers tend to want to consider issues and solutions before looking at solid models or process specifications in detail. Secondly, design rationale is mainly described using text, which often embodies much relevant design knowledge. Last but not least, design rationale is generally captured by identifying elements and their dependencies, i.e. in a structured way which opens the opportunity for going beyond simple keyword-based searching. In this paper, the management of design rationale for the re-use of design knowledge is presented. The retrieval of design rationale records in particular is discussed in detail. As evidenced in the development and evaluation, the methods proposed are useful for the re-use of design knowledge and can be generalised to be used for the retrieval of other kinds of structured design knowledge.
Abstract:
A 3-D model of a superconducting staggered-array undulator has been built, which can serve as a powerful tool for solving electromagnetic problems and optimizing the field of such a design. Given the limitations of 2-D simulation for irregular shapes and complex geometries, 3-D models are more desirable for a comprehensive investigation. An optimization method for the undulator peak field is proposed; up to 32% enhancement can be achieved by introducing major-segment bulks. Some improvements of the undulator design are obtained through careful analysis of the simulation results.
Abstract:
In China, and especially in the Three-Gorges Reservoir, our knowledge of algal growth potential and nutrient limitation is still limited. In the spring of 2006, water-column ratios of total nitrogen to total phosphorus were investigated and algal bioassays were performed to determine the algal growth potential of the waters and the nutrient limitation of the mainstream and Xiangxi Bay of the Three-Gorges Reservoir. The results showed that sampling sites in the mainstream were co-limited by N and P or P-limited alone, while sites in Xiangxi Bay were N-limited alone. Fe likely plays an important role in determining the appearance and disappearance of algal blooms in the Three-Gorges Reservoir. Native algae, Pseudokirchneriella subcapitata and Cyclotella meneghiniana, had high growth potential in the Three-Gorges Reservoir.
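Ratio-based screening of the kind described above can be sketched in a few lines. The thresholds below are common rules of thumb for TN:TP mass ratios, not values reported for the Three-Gorges Reservoir study, and the sample concentrations are invented for illustration:

```python
# Hypothetical sketch of TN/TP mass-ratio screening for nutrient
# limitation. Thresholds are assumed rules of thumb, not study values.
def nutrient_limitation(tn_mg_l, tp_mg_l, low=9.0, high=22.5):
    """Classify by TN:TP mass ratio: below `low` suggests N limitation,
    above `high` suggests P limitation, in between suggests co-limitation."""
    ratio = tn_mg_l / tp_mg_l
    if ratio < low:
        return "N-limited", ratio
    if ratio > high:
        return "P-limited", ratio
    return "N and P co-limited", ratio

print(nutrient_limitation(1.8, 0.05))  # ratio 36: P-limited
print(nutrient_limitation(0.4, 0.08))  # ratio 5: N-limited
```

In practice, as the abstract notes, such ratio screening is combined with bioassays, since ratios alone say nothing about absolute nutrient sufficiency.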
Abstract:
This thesis focuses on the modelling of settlement-induced damage to masonry buildings. In densely populated areas, the need for new space is producing a rapid increase in underground excavation. Due to the construction of new metro lines, tunnelling activity in urban areas is growing. One of the consequences is greater attention to the risk of damage to existing structures. Thus, the assessment of potential damage to surface buildings has become an essential stage of excavation projects in urban areas (Chapter 1). The current damage risk assessment procedure is based on strong simplifications, which do not always lead to conservative results. The object of this thesis is the development of an improved damage classification system which takes into account the parameters influencing the structural response to settlement, such as the non-linear behaviour of masonry and the soil-structure interaction. The methodology used in this research is based on experimental and numerical modelling. The design and execution of an experimental benchmark test representative of the problem allows the principal factors and mechanisms involved to be identified. The numerical simulations make it possible to generalize the results to a broader range of physical scenarios. The methodological choice is based on a critical review of the currently available procedures for the assessment of settlement-induced building damage (Chapter 2). A new experimental test on a 1/10th-scale masonry façade with a rubber base interface is specifically designed to investigate the effect of soil-structure interaction on tunnelling-induced damage (Chapter 3). The experimental results are used to validate a 2D semi-coupled finite element model for the simulation of the structural response (Chapter 4).
The numerical approach, which includes a continuum cracking model for the masonry and a non-linear interface to simulate the soil-structure interaction, is then used to perform a sensitivity study on the effect of openings, material properties, initial damage, initial conditions, normal and shear behaviour of the base interface, and applied settlement profile (Chapter 5). The results quantify the major role played by the normal stiffness of the soil-structure interaction and by the material parameters defining the quasi-brittle masonry behaviour. The limitations of the 2D modelling approach in simulating the progressive 3D displacement field induced by the excavation, and the consequent torsional response of the building, are overcome by the development of a 3D coupled model of building, foundation, soil and tunnel (Chapter 6). Following the same method applied to the 2D semi-coupled approach, the 3D model is validated through comparison with the monitoring data of a literature case study. The model is then used to carry out a series of parametric analyses on geometrical factors: the aspect ratio of the horizontal building dimensions with respect to the tunnel axis direction, the presence of adjacent structures, and the position and alignment of the building with respect to the excavation (Chapter 7). The results show the governing effect of the 3D building response, proving the relevance of 3D modelling. Finally, the results from the 2D and 3D parametric analyses are used to set the framework of an overall damage model which correlates the analysed structural features with the risk of the building being damaged by a given settlement (Chapter 8). This research therefore provides an increased experimental and numerical understanding of the building response to excavation-induced settlements, and sets the basis for an operational tool for the risk assessment of structural damage (Chapter 9).
Abstract:
Qubit measurement by mesoscopic charge detectors has received great interest in the mesoscopic transport and solid-state quantum computation communities, and some controversial issues remain unresolved. In this work, we revisit the continuous weak measurement of a solid-state qubit by single-electron transistors (SETs) in the nonlinear-response regime. For two SET models typically used in the literature, we find that the signal-to-noise ratio can violate the universal upper bound of 4 that is imposed quantum mechanically on linear-response detectors. This result can be understood by means of the cross-correlation of the detector currents, viewing the two junctions of the single SET as two detectors. Possible limitations of the potential-scattering approach to this result are also discussed.
Abstract:
A Bessel beam can propagate without any diffractive spreading, overcoming the Rayleigh-range limitation of a Gaussian beam with the same spot size, which makes it useful for guiding particles in the next generation of optical tweezers. The mathematical description of the Bessel beam generated by an axicon is usually based on Fresnel diffraction integral theory. In this paper, we derive another type of analytic expression, based on interference theory, suitable for describing the beam profile generated by an axicon illuminated by a Gaussian beam. Compared with the Fresnel diffraction integral theory, this approach requires far fewer approximations in the mathematical analysis. According to the derived expression, the beam intensity profile at any position behind the axicon can be calculated, not just within the crossing region to which the Fresnel diffraction integral theory is restricted. Experiments show that the theoretical results fit the experimental results very well.
Abstract:
To avoid the limitations of the widely used methods for predicting soil organic carbon partition coefficients (K-OC) from hydrophobic parameters, e.g., the n-octanol/water partition coefficients (K-OW) and reversed-phase high-performance liquid chromatographic (RP-HPLC) retention factors, a soil column liquid chromatographic (SCLC) method was developed for K-OC prediction. Real soils were used as the packing materials of RP-HPLC columns, and the correlations between the retention factors of organic compounds on soil columns (k(soil)) and K-OC measured by the batch equilibrium method were studied. Good correlations were achieved between k(soil) and K-OC for three types of soils with different properties. All the squared correlation coefficients (R^2) of the linear regression between log k(soil) and log K-OC were higher than 0.89, with standard deviations of less than 0.21. In addition, the prediction of K-OC from K-OW and from the RP-HPLC retention factors on a cyanopropyl (CN) stationary phase (k(CN)) was comparatively evaluated for the three types of soils. The results show that the prediction of K-OC from k(CN) and K-OW is only applicable to some specific types of soils. The results obtained in the present study prove that the SCLC method is appropriate for K-OC prediction across different types of soils, whereas the applicability of hydrophobic parameters for predicting K-OC largely depends on the properties of the soil concerned.
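The calibration step described above, fitting log K-OC against log k(soil) and checking R^2, can be sketched as an ordinary least-squares fit in log-log space. The data points below are invented for illustration and are not values from the study:

```python
import math

# Hypothetical retention factors and partition coefficients,
# invented for illustration only.
k_soil = [0.8, 1.5, 3.2, 6.0, 12.5]
K_oc = [45, 110, 300, 700, 1900]

# Fit log K_oc = a * log k_soil + b by ordinary least squares.
x = [math.log10(v) for v in k_soil]
y = [math.log10(v) for v in K_oc]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
a = sxy / sxx
b = my - a * mx

# Coefficient of determination R^2 of the log-log regression.
ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
ss_tot = sum((yi - my) ** 2 for yi in y)
r2 = 1 - ss_res / ss_tot
print(f"slope={a:.3f}, intercept={b:.3f}, R^2={r2:.3f}")
```

With a calibration like this in hand, a measured k(soil) for a new compound converts to a predicted K-OC as 10^(a*log10(k_soil)+b).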
Abstract:
We have investigated hole nucleation and growth induced by crystallization of thin crystalline-coil diblock copolymer films. Semicrystalline rodlike assemblies from a neutral/selective binary solvent are used as seeds to nucleate crystallization at temperatures above the glass transition temperature (T-g) but below the melting point (T-m). The crystallization of the nanorods drives neighboring copolymer chains to diffuse into the growing nanorods. Depletion of copolymer chains yields hole nucleation and growth at the edge of the nanorods. Simultaneously, the polymer chains not incorporated into the nanorods were oriented by induction from the free surface and the substrate, limiting the hole depth to the lamellar spacing, ~20 nm. The holes, as well as the nanorods, grow as t^alpha, where t is the annealing time, and a crossover in the exponent alpha is found. The orientation and stretching of the copolymer chains by the surface and interface are believed to accelerate the crystallization, which in turn accelerates the growth rate of the holes. At T > T-m, the grains melt and the copolymer chains relax and flow into the first layer of the film.
Abstract:
This paper deals with the evaluation of the reliability of analytical results obtained by Kalman filtering. Two evaluation criteria were compared: one is based on autocorrelation analysis of the innovation sequence, the so-called NAC criterion; the other is the innovations number, which is in fact the autocorrelation coefficient of the innovation sequence at the initial wavelength. Both criteria allow compensation for wavelength positioning errors in spectral scans, but they work in different ways. The NAC criterion can provide information about the reliability of an individual result, which is very useful for indicating unmodelled emissions, while the innovations number must be combined with normalization of the innovations, or rely on the sequence itself, for the same purpose. The major limitation of the NAC criterion is that it does not allow theoretical modelling of continuous backgrounds, which, however, is convenient in practical analysis and is possible with the innovations number criterion.
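The autocorrelation idea underlying both criteria can be illustrated in a few lines; this is a generic sketch of a normalized autocorrelation whiteness check, not the exact NAC definition from the paper. For a well-modelled Kalman filter the innovation sequence is white, so its normalized autocorrelation at nonzero lags should be near zero:

```python
import random

# Normalized autocorrelation of a sequence at a given lag.
# Lag 0 gives exactly 1; for white innovations, nonzero lags
# should be close to 0 (within sampling error).
def normalized_autocorrelation(innovations, lag):
    n = len(innovations)
    m = sum(innovations) / n
    c0 = sum((v - m) ** 2 for v in innovations) / n
    ck = sum((innovations[i] - m) * (innovations[i + lag] - m)
             for i in range(n - lag)) / n
    return ck / c0

# Simulated "innovations" from a well-modelled filter: white noise.
random.seed(0)
white = [random.gauss(0.0, 1.0) for _ in range(2000)]

print(round(normalized_autocorrelation(white, 0), 3))  # exactly 1.0
print(round(normalized_autocorrelation(white, 1), 3))  # near 0
```

A markedly nonzero value at small lags would indicate structure left in the innovations, i.e. an unmodelled component, which is the kind of diagnostic the NAC criterion provides for an individual result.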