979 results for "Diagnostic reference level"


Relevance: 80.00%

Abstract:

PURPOSE: To investigate the relationship between hemoglobin (Hgb) and brain tissue oxygen tension (PbtO(2)) after severe traumatic brain injury (TBI) and to examine its impact on outcome. METHODS: This was a retrospective analysis of a prospective cohort of severe TBI patients whose PbtO(2) was monitored. The relationship between Hgb, categorized into four quartiles (≤9; 9-10; 10.1-11; >11 g/dl), and PbtO(2) was analyzed using mixed-effects models. Anemia with compromised PbtO(2) was defined as episodes of Hgb ≤ 9 g/dl with simultaneous PbtO(2) < 20 mmHg. Outcome was assessed at 30 days using the Glasgow Outcome Score (GOS), dichotomized as favorable (GOS 4-5) vs. unfavorable (GOS 1-3). RESULTS: We analyzed 474 simultaneous Hgb and PbtO(2) samples from 80 patients (mean age 44 ± 20 years, median GCS 4 (3-7)). Using Hgb > 11 g/dl as the reference level, and controlling for important physiologic covariates (CPP, PaO(2), PaCO(2)), Hgb ≤ 9 g/dl was the only Hgb level associated with lower PbtO(2) (coefficient -6.53 (95% CI -9.13; -3.94), p < 0.001). Anemia with simultaneous PbtO(2) < 20 mmHg, but not anemia alone, increased the risk of unfavorable outcome (odds ratio 6.24 (95% CI 1.61; 24.22), p = 0.008), controlling for age, GCS, Marshall CT grade, and APACHE II score. CONCLUSIONS: In this cohort of severe TBI patients whose PbtO(2) was monitored, an Hgb level of 9 g/dl or less was associated with compromised PbtO(2). Anemia with simultaneous compromised PbtO(2), but not anemia alone, was a risk factor for unfavorable outcome, irrespective of injury severity.
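The study's exposure definition can be made concrete in a few lines. This is a hypothetical sketch of the Hgb quartile categorization and the "anemia with compromised PbtO(2)" flag defined above; the sample values are invented for illustration, not the cohort's data.

```python
# Hypothetical sketch of the abstract's definitions: Hgb quartiles and the
# "anemia with compromised PbtO2" episode (Hgb <= 9 g/dl with simultaneous
# PbtO2 < 20 mmHg). Sample (Hgb, PbtO2) pairs below are invented.

def hgb_quartile(hgb):
    """Map an Hgb value (g/dl) onto the four categories used in the study."""
    if hgb <= 9:
        return "<=9"
    elif hgb <= 10:
        return "9-10"
    elif hgb <= 11:
        return "10.1-11"
    return ">11"

def compromised_episode(hgb, pbto2):
    """True when Hgb <= 9 g/dl coincides with PbtO2 < 20 mmHg."""
    return hgb <= 9 and pbto2 < 20

samples = [(8.4, 18.0), (9.5, 25.0), (11.8, 31.0)]  # invented (Hgb, PbtO2) pairs
flags = [compromised_episode(h, p) for h, p in samples]
print(flags)  # [True, False, False]
```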


The objective of the thesis is to structure and model the factors that contribute to and can be used in evaluating project success. The purpose of this thesis is to enhance the understanding of three research topics. The goal setting process, success evaluation and decision-making process are studied in the context of a project, business unit and its business environment. To achieve the objective three research questions are posed. These are 1) how to set measurable project goals, 2) how to evaluate project success and 3) how to affect project success with managerial decisions. The main theoretical contribution comes from deriving a synthesis of these research topics which have mostly been discussed apart from each other in prior research. The research strategy of the study has features from at least the constructive, nomothetical, and decision-oriented research approaches. This strategy guides the theoretical and empirical part of the study. Relevant concepts and a framework are composed on the basis of the prior research contributions within the problem area. A literature review is used to derive constructs of factors within the framework. They are related to project goal setting, success evaluation, and decision making. On the basis of this, the case study method is applied to complement the framework. The empirical data includes one product development program, three construction projects, as well as one organization development, hardware/software, and marketing project in their contexts. In two of the case studies the analytic hierarchy process is used to formulate a hierarchical model that returns a numerical evaluation of the degree of project success. It has its origin in the solution idea which in turn has its foundation in the notion of project success. The achieved results are condensed in the form of a process model that integrates project goal setting, success evaluation and decision making.
The process of project goal setting is analysed as a part of an open system that includes a project, the business unit and its competitive environment. Four main constructs of factors are suggested. First, the project characteristics and requirements are clarified. The second and the third construct comprise the components of client/market segment attractiveness and sources of competitive advantage. Together they determine the competitive position of a business unit. Fourth, the relevant goals and the situation of a business unit are clarified to stress their contribution to the project goals. Empirical evidence is gained on the exploitation of increased knowledge and on the reaction to changes in the business environment during a project to ensure project success. The relevance of a successful project to a company or a business unit tends to increase the higher the reference level of project goals is set. However, normal performance or sometimes performance below this normal level is intentionally accepted. Success measures make project success quantifiable. There are result-oriented, process-oriented and resource-oriented success measures. The study also links result measurements to enablers that portray the key processes. The success measures can be classified into success domains determining the areas on which success is assessed. Empirical evidence is gained on six success domains: strategy, project implementation, product, stakeholder relationships, learning situation and company functions. However, some project goals, like safety, can be assessed using success measures that belong to two success domains. For example a safety index is used for assessing occupational safety during a project, which is related to project implementation. Product safety requirements, in turn, are connected to the product characteristics and thus to the product-related success domain. Strategic success measures can be used to weave the project phases together.
Empirical evidence on their static nature is gained. In order-oriented projects the project phases are often contractually divided into different suppliers or contractors. A project from the supplier's perspective can represent only a part of the "whole project" viewed from the client's perspective. Therefore static success measures are mostly used within the contractually agreed project scope and duration. Proof is also acquired on the dynamic use of operational success measures. They help to focus on the key issues during each project phase. Furthermore, it is shown that the original success domains and success measures, their weights and target values can change dynamically. New success measures can replace the old ones to correspond better with the emphasis of the particular project phase. This adjustment concentrates on the key decision milestones. As a conclusion, the study suggests a combination of static and dynamic success measures. Their linkage to an incentive system can make the project management proactive, enable fast feedback and enhance the motivation of the personnel. It is argued that the sequence of effective decisions is closely linked to the dynamic control of project success. According to the used definition, effective decisions aim at adequate decision quality and decision implementation. The findings support that project managers construct and use a chain of key decision milestones to evaluate and affect success during a project. These milestones can be seen as a part of the business processes. Different managers prioritise the key decision milestones to a varying degree. Divergent managerial perspectives, power, responsibilities and involvement during a project offer some explanation for this. Finally, the study introduces the use of Hard Gate and Soft Gate decision milestones. The managers may use the former milestones to provide decision support on result measurements and ad hoc critical conditions.
In the latter milestones they may make intermediate success evaluation also on the basis of other types of success measures, like process and resource measures.
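The thesis applies the analytic hierarchy process (AHP) to turn a hierarchical model into a numerical evaluation of project success. A minimal sketch of the standard AHP weighting step, assuming an invented 3x3 pairwise-comparison matrix over success measures (not the thesis's actual hierarchy):

```python
import numpy as np

# Standard AHP step: priority weights are the normalized principal
# eigenvector of a pairwise-comparison matrix. The matrix entries and the
# success ratings below are invented for illustration.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)            # index of the Perron (largest) eigenvalue
w = np.abs(vecs[:, k].real)
w = w / w.sum()                     # normalized priority weights

ratings = np.array([0.9, 0.6, 0.4]) # invented success ratings per measure
score = float(w @ ratings)          # single aggregate success score
```

The weighted sum collapses the hierarchy into one number, which is how the abstract's "numerical evaluation of the degree of project success" can be read.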


With the aim of monitoring the dynamics of the Livingston Island ice cap, the Departament de Geodinàmica i Geofísica of the Universitat de Barcelona began yearly surveys in the austral summer of 1994-95 on Johnsons Glacier. During this field campaign 10 shallow ice cores were sampled with a manual vertical ice-core drilling machine. The objectives were: i) to detect the tephra layer accumulated on the glacier surface, attributed to the 1970 Deception Island pyroclastic eruption, today interstratified; ii) to verify whether this layer might serve as a reference level; iii) to measure the 137Cs radio-isotope concentration accumulated in the 1965 snow stratum; iv) to use the isochrone layer as a means of verifying the age of the 1970 tephra layer; and v) to calculate both the equilibrium line of the glacier and the average mass balance over the last 28 years (1965-1993). The stratigraphy of the cores, their cumulative density curves and the isothermal ice temperatures recorded confirm that Johnsons Glacier is a temperate glacier. Wind, solar radiation heating and liquid water are the main agents controlling the vertical and horizontal redistribution of the volcanic and cryoclastic particles that are sedimented and remain interstratified within the glacier. It is because of this redistribution that the 1970 tephra layer does not always serve as a very good reference level. The position of the equilibrium line altitude (ELA) in 1993, obtained by the 137Cs spectrometric analysis, varies from about 200 m a.s.l. to 250 m a.s.l. This indicates a rising trend in the equilibrium line altitude from the beginning of the 1970s to the present day. The varying slope orientation of Johnsons Glacier relative to the prevailing NE wind gives rise to large local differences in snow accumulation, which locally modifies the equilibrium line altitude. In the cores studied, 137Cs appears to be associated with the 1970 tephra layer.
This indicates an intense ablation episode throughout the sampled area (at least up to 330 m a.s.l.), which probably occurred synchronously with the 1970 tephra deposition or later. A rough estimate of the specific mass balance reveals a considerable accumulation gradient with altitude.


The aim of this paper is to discuss the trend of overvaluation of the Brazilian currency in the 2000s, presenting an econometric model to estimate the real exchange rate (RER) and the reference level of the RER that should guide long-term economic policy. In the econometric model, we consider long-term structural and short-term components, both of which may be responsible for explaining the overvaluation trend of the Brazilian currency. Our econometric exercise confirms that the Brazilian currency was persistently overvalued throughout almost all of the period under analysis, and we suggest that the long-term reference level of the real exchange rate was reached in 2004. In July 2014, the average nominal exchange rate should have been around 2.90 Brazilian reais per dollar (against an observed nominal rate of 2.22 Brazilian reais per dollar) to achieve the 2004 real reference level (average of the year). That is, according to our estimates, in July 2014 the Brazilian real was overvalued by 30.6 per cent in real terms relative to the reference level. Based on these findings, we conclude the paper by suggesting a mix of policy instruments that should have been used to reverse the overvaluation trend of the Brazilian real exchange rate, including a target for a real exchange rate in the medium and long run that would favor resource allocation toward more technologically intensive sectors.
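The quoted overvaluation figure follows directly from the two nominal rates reported for July 2014; a back-of-envelope check:

```python
# Reproducing the 30.6% overvaluation figure from the abstract's two
# nominal BRL/USD rates for July 2014.
reference_rate = 2.90  # rate consistent with the 2004 real reference level
observed_rate = 2.22   # observed average nominal rate
overvaluation = (reference_rate - observed_rate) / observed_rate
print(f"{overvaluation:.1%}")  # 30.6%
```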


Despite growing concern among researchers and practitioners for psychological health at work, the concept of well-being experienced at work is still poorly understood by the scientific community. Indeed, little effort has been devoted to date to developing knowledge about psychological well-being at work that is grounded in the reality of employees. The objective of this thesis is therefore to develop a conceptualization of psychological well-being at work and a psychometrically reliable instrument attached to it. To this end, two studies were carried out. The first, qualitative and exploratory in nature, was conducted with 20 French-speaking Canadian workers in order to catalogue, from critical incidents they had experienced, manifestations of psychological well-being at work. These could be classified according to a two-axis model: the sphere of reference in which psychological well-being at work is experienced, and the directionality along which it develops. This model was then compared with existing generic conceptualizations of psychological well-being, and this analysis supported the convergent and divergent validity of the model. In a second phase, the Indice de bien-être psychologique au travail (IBEPT) was created on the basis of the manifestations identified in the qualitative study, in order to ensure its content validity. An experimental version of the instrument was then tested with 1080 Quebec workers. Exploratory factor analyses reveal an internal structure of 25 items reflecting 5 dimensions, themselves representing a second-order construct. The construct validity of this conceptualization was then examined through the analysis of intercorrelations with a series of generic measures of psychological well-being and distress.
The results support the convergent validity of the instrument and also demonstrate its divergent validity. Finally, the instrument shows satisfactory internal consistency. At the conclusion of this doctoral research, the results of the two studies are interpreted in light of the current state of knowledge on psychological well-being, the limitations of the studies are stated, and avenues for future research are proposed.
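Internal consistency of such an instrument is typically summarized with Cronbach's alpha. A small self-contained sketch of the standard formula; the item scores below are invented, not the IBEPT data:

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/variance(totals)).
# Items and respondent scores below are invented for illustration.

def cronbach_alpha(items):
    """items: list of per-item score lists, all covering the same respondents."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

items = [[1, 2, 3, 4],
         [2, 3, 4, 5],
         [1, 3, 3, 5]]   # 3 items x 4 respondents, invented
alpha = cronbach_alpha(items)
```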


The results of an investigation on the limits of the random errors contained in the basic data of Physical Oceanography and their propagation through the computational procedures are presented in this thesis. It also suggests a method which increases the reliability of the derived results. The thesis is presented in eight chapters, including the introductory chapter. Chapter 2 discusses the general theory of errors that is relevant in the context of the propagation of errors in Physical Oceanographic computations. The error components contained in the independent oceanographic variables, namely temperature, salinity and depth, are delineated and quantified in chapter 3. Chapter 4 discusses and derives the magnitude of errors in the computation of the dependent oceanographic variables (density in situ, σt, specific volume and specific volume anomaly) due to the propagation of errors contained in the independent oceanographic variables. The errors propagated into the computed values of the derived quantities, namely dynamic depth and relative currents, have been estimated and are presented in chapter 5. Chapter 6 reviews the existing methods for the identification of the level of no motion and suggests a method for the identification of a reliable zero reference level. Chapter 7 discusses the available methods for the extension of the zero reference level into shallow regions of the oceans and suggests a new, more reliable method. A procedure of graphical smoothing of dynamic topographies between the error limits to provide more reliable results is also suggested in this chapter. Chapter 8 deals with the computation of the geostrophic current from these smoothed values of dynamic heights, with reference to the selected zero reference level. The summary and conclusions are also presented in this chapter.
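The propagation analysis described above rests on first-order (Gaussian) error propagation, where the variance of a derived quantity is the sum of squared sensitivities times the input variances. A generic numerical sketch; the toy derived quantity and error magnitudes are invented stand-ins, not the thesis's oceanographic formulas:

```python
import math

# First-order propagation of independent random errors through f(x1..xn):
#   sigma_f^2 = sum_i (df/dxi)^2 * sigma_i^2
# Partial derivatives are approximated by central differences.

def propagate(f, x, sigmas, h=1e-6):
    var = 0.0
    for i, s in enumerate(sigmas):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dfdx = (f(*xp) - f(*xm)) / (2 * h)
        var += (dfdx * s) ** 2
    return math.sqrt(var)

# Invented linear "derived quantity" from temperature T (degC), salinity S (psu)
f = lambda T, S: 1000 + 0.8 * S - 0.2 * T
err = propagate(f, [15.0, 35.0], [0.02, 0.05])  # sigma_T = 0.02, sigma_S = 0.05
```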


Introduction: Cardiovascular diseases are the most frequent cause of death in the developed world. Most are related to disorders of the coronary arteries; however, a subgroup of patients presents structural abnormalities as the cause of cardiac ischemia. Materials and methods: Descriptive study. We used a database collected over two years in a hemodynamics service in Bogotá. Inclusion and exclusion criteria were applied and four age groups were defined; all patients underwent diagnostic cardiac catheterization. The variables analyzed were: referral diagnosis, medical history, and catheterization results, including the presence of structural abnormalities such as valvular disease, anomalous origin of the coronary arteries, and myocardial bridges. For the descriptive analysis, prevalences were reported; for the analysis of associations, contingency tables and the chi-square test statistic were used. No multivariate analysis was performed because no statistically significant associations were found. Results: The mean age of the patients was 62 years (SD = 10.5); males accounted for 61.7%; the prevalence of stable angina was 61.6%; the three most prevalent antecedents were arterial hypertension (41.4%), hyperlipidemia (19.1%), and diabetes mellitus (17.7%). The overall prevalence of structural abnormalities in the study population was 12.9%, distributed by type as follows: 1.4% myocardial bridges, 0.7% anomalous origin of the coronary arteries, and 10.8% valvular disease. Conclusions: An association was found between medical history and the presence of cardiac valvular disease. Gender showed no relationship with the presence of cardiac abnormalities, despite the greater proportion of men in the study population.
The limitations of this study relate to the sample size, given the low prevalence of the structural anomalies measured.
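The association analysis described above relies on contingency tables and the chi-square statistic. A hand-rolled sketch for a 2x2 table; the counts are invented, not the study's data:

```python
# Chi-square statistic for a 2x2 contingency table:
#   chi2 = sum (observed - expected)^2 / expected,
# with expected counts from the row/column margins.

def chi2_2x2(table):
    (a, b), (c, d) = table
    n = a + b + c + d
    row = [a + b, c + d]
    col = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            exp = row[i] * col[j] / n
            stat += (obs - exp) ** 2 / exp
    return stat

# e.g. hypertension history (rows) vs valvular disease (columns), invented counts
stat = chi2_2x2([(30, 170), (20, 280)])
```

The statistic would then be compared against the chi-square distribution with 1 degree of freedom to obtain a p-value.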


In this chapter, an asymmetric DSGE model is built in order to account for asymmetries in business cycles. One of the most important contributions of this work is the construction of a general utility function which nests loss aversion, risk aversion and habit formation by means of a smooth transition function. The main idea behind this asymmetric utility function is that under recession the agents over-smooth consumption and leisure choices in order to prevent a large deviation from the reference level of utility, while under boom the agents simply smooth consumption and leisure, trying to stay as far above the reference level of utility as possible. The simulations of this model by means of the Perturbations Method show that it is possible to reproduce asymmetrical business cycles in which recessions (on shock) are stronger than booms and booms are more long-lasting than recessions. One additional and unexpected result is a downward stickiness displayed by real wages. As a consequence, there is a more persistent fall in employment in recession than in boom. Thus, the model reproduces not only asymmetrical business cycles but also real stickiness and hysteresis.
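The smooth-transition device can be sketched as a logistic weight that blends a loss-averse recession regime with ordinary smoothing in booms. The functional form and parameter values below are illustrative assumptions, not the chapter's calibration:

```python
import math

# A logistic smooth-transition weight in (0, 1): near 0 deep in recession,
# near 1 in a boom. gamma (steepness) and c (threshold) are invented.

def transition(x, gamma=5.0, c=0.0):
    return 1.0 / (1.0 + math.exp(-gamma * (x - c)))

def utility(cons, ref, x):
    """Blend loss-averse behaviour (shortfalls from the reference level
    weighted ~2.25x, a common loss-aversion coefficient) with standard
    smoothing, according to the state of the cycle x."""
    w = transition(x)
    gap = cons - ref
    loss_averse = gap if gap >= 0 else 2.25 * gap
    return w * gap + (1 - w) * loss_averse
```

The same consumption shortfall is thus more painful in a recession (weight near the loss-averse regime) than in a boom, which is one way to read the "over-smoothing under recession" mechanism.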


This study aims to assess whether the radiation doses received by patients undergoing abdominopelvic Computed Tomography (CT) examinations at two hospitals in the greater Lisbon area comply with the European Diagnostic Reference Levels (DRLs). The dosimetric quantities of examinations performed with modulated tube current were also compared with those of examinations performed with constant current in both hospitals. The study consisted of collecting data from 200 abdominopelvic CT examinations, 100 in each hospital. In Hospital A, the mean DLP was 562.34 mGy.cm and the mean CTDIvol was 12.06 mGy. In Hospital B, the mean DLP and CTDIvol were 767.14 mGy.cm and 15.02 mGy, respectively. Across the whole sample, no examination exceeded the DRL values for the dosimetric quantities. It was also found that examinations performed at Hospital B entail, on average, a higher ionizing radiation dose to the patient.
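The compliance check described above amounts to comparing dose quantities against the DRLs. A sketch using the abstract's reported means; the European reference values below are assumptions for illustration, not the ones used in the study:

```python
# Comparing mean abdominopelvic CT dose quantities against assumed DRLs.
DRL_DLP = 800.0      # mGy.cm, assumed DRL (not from the study)
DRL_CTDIVOL = 25.0   # mGy, assumed DRL (not from the study)

hospitals = {"A": {"DLP": 562.34, "CTDIvol": 12.06},
             "B": {"DLP": 767.14, "CTDIvol": 15.02}}

compliant = {name: d["DLP"] <= DRL_DLP and d["CTDIvol"] <= DRL_CTDIVOL
             for name, d in hospitals.items()}
print(compliant)  # {'A': True, 'B': True}
```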


This study aimed to evaluate and compare the radiation doses collected from a sample of 69 patients, in two hospitals with different digital image acquisition methods (direct and indirect), who underwent chest radiography in the postero-anterior (PA) projection. For both hospitals, the entrance skin dose (ESD) and effective dose (E) were estimated using the PCXMC software, for comparison with each other and with international reference values. In Hospital A, with direct digital acquisition, the mean ESD was 0.089 mGy and the mean E was 0.013 mSv. In Hospital B, with indirect digital acquisition, the mean ESD was 0.151 mGy and the mean E was 0.030 mSv. In both hospitals, the mean doses did not exceed the limit recommended by law (0.3 mGy). For the PA chest radiograph, the calculated local diagnostic reference level (DRL) was 0.107 mGy for Hospital A and 0.164 mGy for Hospital B. In PA chest radiography, the use of a direct acquisition system implied a dose reduction of 41%, consistent with the available references, which point to a dose reduction of about 50% between the two systems.
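The 41% figure follows from the two mean ESD values reported above:

```python
# Reproducing the quoted dose reduction from the two mean ESD values.
esd_direct = 0.089    # mGy, Hospital A (direct digital acquisition)
esd_indirect = 0.151  # mGy, Hospital B (indirect digital acquisition)
reduction = (esd_indirect - esd_direct) / esd_indirect
print(f"{reduction:.0%}")  # 41%
```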


A survey of pediatric radiological examinations was carried out in a reference pediatric hospital in the city of São Paulo, in order to investigate the doses to children undergoing conventional X-ray examinations. The results showed that the majority of pediatric patients are below 4 years of age, and that about 80% of the examinations correspond to chest projections. Doses in typical radiological examinations were measured in vivo with thermoluminescent dosimeters (LiF:Mg,Ti and LiF:Mg,Cu,P) attached to the skin of the children to determine the entrance surface dose (ESD). Homogeneous phantoms were also used to obtain the ESD for younger children, because the technique uses such a low kVp that the dosimeters would produce an artifact image in the patient radiograph. Four kinds of pediatric examinations were investigated: three conventional examinations (chest, skull and abdomen) and a fluoroscopic procedure (barium swallow). Relevant information about the kVp and mAs values used in the examinations was collected, and we discuss how these parameters can affect the ESD. The ESD values measured in this work are compared to the reference levels published by the European Commission for pediatric patients. The results obtained (third quartile of the ESD distribution) for chest AP examinations in three age groups were: 0.056 mGy (2-4 years old); 0.068 mGy (5-9 years old); 0.069 mGy (10-15 years old). All of them are below the European reference level (0.100 mGy). ESD values measured for the older age group in skull and abdomen AP radiographs (mean values 3.44 and 1.20 mGy, respectively) are above the European reference levels (1.5 mGy for skull and 1.0 mGy for abdomen). ESD values measured in the barium swallow examination reached 10 mGy in skin regions corresponding to the thyroid and esophagus. It was noticed during this survey that some technicians improperly use X-ray fluoroscopy in conventional examinations to help them position the patient.
The results presented here are a preliminary survey of doses in pediatric radiological examinations, and they show that it is necessary to investigate the technical parameters used to perform the radiographs, to introduce practices to control pediatric patients' doses, and to improve personnel training for performing pediatric examinations.
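The DRL-style comparison above uses the third quartile of the ESD distribution against the European reference level. A minimal sketch with an invented sample:

```python
import statistics

# Third quartile of an ESD sample compared against the European reference
# level for chest AP (0.100 mGy). The sample values are invented.
esd = [0.031, 0.040, 0.045, 0.052, 0.055, 0.060, 0.064, 0.070, 0.090]
q1, q2, q3 = statistics.quantiles(esd, n=4)  # quartiles ("exclusive" method)
exceeds_drl = q3 > 0.100
```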


Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)


We present three new stable gravity inversion methods for estimating the relief of an arbitrary interface separating two media. To guarantee the stability of the solution, we introduce a priori information about the interface to be mapped through the minimization of one (or more) stabilizing functionals. The three methods therefore differ in the kinds of physical-geological information incorporated. In the first method, called global smoothness, the interface depths are estimated at discrete points, presuming a priori knowledge of the density contrast between the media. To stabilize the inverse problem we introduce two constraints: (a) proximity between the estimated and true interface depths at a few points provided by boreholes; and (b) proximity between depths estimated at adjacent points. The combination of these two constraints imposes a uniform smoothness on the entire estimated interface while simultaneously minimizing, at a few points, the misfits between the depths known from boreholes and those estimated at the same points. The second method, called weighted smoothness, estimates the interface depths at discrete points, assuming a priori knowledge of the density contrast. This method incorporates the geological information that the interface is smooth except in regions of discontinuities produced by faults; that is, the interface is predominantly smooth but locally discontinuous. To incorporate this information, we developed an iterative process in which three kinds of constraints are imposed on the parameters: (a) weighted proximity between depths estimated at adjacent points; (b) lower and upper bounds on the depths; and (c) proximity between all estimated depths and a known numerical value.
Starting from the solution estimated by the global smoothness method, this second method iteratively accentuates the geometric features present in the initial solution; that is, smooth regions of the interface tend to become smoother and abrupt regions tend to become more abrupt. To this end, the method assigns different weights to the proximity constraint between adjacent depths. These weights are automatically updated so as to accentuate the discontinuities subtly detected by the global smoothness solution. Constraints (b) and (c) are used to compensate for the loss of stability caused by introducing weights close to zero in some of the proximity constraints between adjacent parameters, and to incorporate the a priori information that the deepest region of the interface is flat and horizontal. Constraint (b) strictly imposes that any estimated depth is non-negative and smaller than the maximum interface depth known a priori; constraint (c) imposes that all estimated depths are close to a value that deliberately violates the maximum interface depth. The compromise between the conflicting constraints (b) and (c) biases the final solution toward accentuating vertical discontinuities and producing a smooth, flattened estimate of the deepest region. The third method, called minimum moment of inertia, estimates the density contrasts of a subsurface region discretized into elementary prismatic volumes. This method incorporates the geological information that the interface to be mapped bounds an anomalous source whose horizontal dimensions are larger than its largest vertical dimension, with borders dipping vertically or toward the center of mass, and that all the anomalous mass (or mass deficiency) is concentrated, in a compact way, around a reference level.
Conceptually, this information is introduced by minimizing the moment of inertia of the sources with respect to the reference level known a priori. This minimization is carried out in a parameter subspace consisting of compact sources with borders dipping vertically or toward the center of mass. In practice, the information is introduced through an iterative process that starts from a solution whose moment of inertia is close to zero and adds, at each iteration, a contribution with minimum moment of inertia with respect to the reference level, so that the new estimate honors the lower and upper bounds on the density contrast and simultaneously minimizes the misfits between the observed and fitted gravity data. Additionally, the iterative process tends to "freeze" the estimates at one of the bounds (minimum or maximum). The final result is an anomalous source compacted around the reference level whose density-contrast distribution tends to the upper bound (in absolute value) established a priori. These three methods were applied to synthetic and real data produced by the basement relief of sedimentary basins. Global smoothness produced a good reconstruction of the framework of basins that violate the smoothness condition, both with synthetic data and with data from the Recôncavo Basin. This method has the lowest resolution of the three. Weighted smoothness improved the resolution of basement reliefs exhibiting faults with large throws and high dip angles, indicating great potential for interpreting the framework of extensional basins, as shown in tests with synthetic data and with data from Steptoe Valley, Nevada, USA, and from the Recôncavo Basin. In the minimum moment of inertia method, the mean terrain level was taken as the reference level.
The applications to synthetic data and to the Bouguer anomalies of the San Jacinto Graben, California, USA, and of the Recôncavo Basin showed that, compared with the global and weighted smoothness methods, this method estimates faults with small throws with excellent resolution, without imposing the restriction that the interface exhibit few local discontinuities, as in the weighted smoothness method.
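The "global smoothness" (suavidade global) idea is essentially Tikhonov-style regularized least squares with a first-difference stabilizer tying adjacent depths together. A toy numerical sketch; the operator, data and regularization weight are invented, not the thesis's gravimetric forward model:

```python
import numpy as np

# Toy sketch of smoothness-regularized inversion: estimate parameters p from
# data d = G p while penalizing differences between adjacent parameters.
#   p_hat = argmin ||G p - d||^2 + mu ||R p||^2
#         = (G^T G + mu R^T R)^{-1} G^T d
rng = np.random.default_rng(0)
n = 20
G = rng.normal(size=(30, n))          # invented linear forward operator
true_p = np.linspace(1.0, 2.0, n)     # a smooth "interface" of depths
d = G @ true_p                        # noiseless synthetic data

# First-difference matrix R: each row is (... -1, 1 ...)
R = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
mu = 0.1                              # regularization weight, invented
p_hat = np.linalg.solve(G.T @ G + mu * R.T @ R, G.T @ d)
```

With noiseless data and a genuinely smooth model, the regularized estimate stays close to the true depths; the weighted-smoothness variant described above would replace the uniform weight mu with per-difference weights updated iteratively.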


In the instrumental records of daily precipitation, we often encounter one or more periods in which values below some threshold were not registered. Such periods, besides lacking small values, also have a large number of dry days. Their cumulative distribution function is shifted to the right relative to that of other portions of the record with more reliable observations. Such problems are examined in this work, based mostly on the two-sample Kolmogorov-Smirnov (KS) test, where the portion of the series with more dry days is compared with the portion with fewer dry days. Another relatively common problem in daily rainfall data is the prevalence of integers, either throughout the period of record or in some part of it, likely resulting from truncation during data compilation prior to archiving or from coarse rounding of daily readings by observers. This problem is identified by simple calculation of the proportion of integers in the series, taking the expected proportion as 10%. The above two procedures were applied to the daily rainfall data sets from the European Climate Assessment (ECA), Southeast Asian Climate Assessment (SACA), and Brazilian Water Resources Agency (BRA). Taking the statistic D of the KS test > 0.15 and the corresponding p-value < 0.001 as the condition to classify a given series as suspicious, the proportions of the ECA, SACA, and BRA series falling into this category are, respectively, 34.5%, 54.3%, and 62.5%. With relation to the coarse-rounding problem, the proportions of series exceeding twice the 10% reference level are 3%, 60%, and 43% for the ECA, SACA, and BRA data sets, respectively. A simple way to visualize the two problems addressed here is by plotting the time series of daily rainfall over a limited range, for instance 0-10 mm day⁻¹.
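The two screening procedures described above can be sketched with a hand-rolled two-sample KS statistic (the D that flags a series as suspicious when D > 0.15) and an integer-fraction check against the 10% reference level; the rainfall values below are invented:

```python
# Two-sample Kolmogorov-Smirnov D: the maximum vertical distance between
# the empirical CDFs of the two portions of a series.

def ks_d(sample1, sample2):
    xs = sorted(set(sample1) | set(sample2))
    n1, n2 = len(sample1), len(sample2)
    d = 0.0
    for x in xs:
        f1 = sum(v <= x for v in sample1) / n1
        f2 = sum(v <= x for v in sample2) / n2
        d = max(d, abs(f1 - f2))
    return d

def integer_fraction(series):
    """Coarse-rounding screen: the expected fraction of integers is ~10%."""
    return sum(float(v).is_integer() for v in series) / len(series)

a = [0.0, 0.0, 0.2, 1.4, 3.0, 5.6]  # portion with more dry days (invented)
b = [0.1, 0.4, 1.1, 2.2, 4.7, 9.8]  # portion with fewer dry days (invented)
suspicious = ks_d(a, b) > 0.15
```

In the actual study the D statistic would be paired with its p-value (here omitted) before classifying a series as suspicious.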