987 results for Zero reference level
Abstract:
Human exposure to Bisphenol A (BPA) results mainly from ingestion of food and beverages. Information regarding BPA effects on colon cancer, one of the major causes of death in developed countries, is still scarce. Likewise, little is known about BPA drug interactions, although its potential role in doxorubicin (DOX) chemoresistance has been suggested. This study aims to assess potential interactions between BPA and DOX on HT29 colon cancer cells. HT29 cell response was evaluated after exposure to BPA, DOX, or co-exposure to both chemicals. Transcriptional analysis of several cancer-associated genes (c-fos, AURKA, p21, bcl-xl and CLU) shows that BPA exposure induces slight up-regulation exclusively of bcl-xl without affecting cell viability. On the other hand, a sub-therapeutic DOX concentration (40 nM) results in highly altered c-fos, bcl-xl, and CLU transcript levels, and this is not affected by co-exposure with BPA. Conversely, DOX at a therapeutic concentration (4 μM) results in distinct and very severe transcriptional alterations of c-fos, AURKA, p21 and CLU that are counteracted by co-exposure with BPA, resulting in transcript levels similar to those of the control. Co-exposure with BPA slightly decreases apoptosis in relation to DOX 4 μM alone without affecting DOX-induced loss of cell viability. These results suggest that BPA exposure can influence chemotherapy outcomes and therefore emphasize the necessity of a better understanding of BPA interactions with chemotherapeutic agents in the context of risk assessment.
Abstract:
Objective: To suggest a national value for the diagnostic reference level (DRL), in terms of activity in MBq.kg⁻¹, for nuclear medicine procedures with fluorodeoxyglucose (18F-FDG) in whole-body positron emission tomography (PET) scans of adult patients. Materials and Methods: A survey of the 18F-FDG activity values administered in Brazilian clinics was undertaken by means of a questionnaire including questions about the number and manufacturer of the installed equipment, model and detector type. The suggested DRL value was based on the calculation of the third quartile of the distribution of activity values reported by the clinics. Results: Among the surveyed Brazilian clinics, 58% responded completely or partially to the questionnaire, and the results demonstrated a variation of up to 100% in the reported radiopharmaceutical activity. The suggested DRL for 18F-FDG/PET activity was 5.54 MBq.kg⁻¹ (0.149 mCi.kg⁻¹). Conclusion: The present study has demonstrated the lack of standardization in administered radiopharmaceutical activities for PET procedures in Brazil, corroborating the necessity of an official DRL value to be adopted in the country. The suggested DRL value demonstrates that there is room for optimization of the procedures and of the 18F-FDG/PET activities administered in Brazilian clinics, so as to reduce the doses delivered to patients. It is important to highlight that this value should be continually revised and optimized, at least every five years.
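The third-quartile convention used to derive a DRL can be sketched as follows. The activity values below are illustrative placeholders, not the survey data; the mCi conversion uses the standard 1 mCi = 37 MBq.

```python
from statistics import quantiles

# Hypothetical per-patient administered activities in MBq per kg of body
# weight (illustrative values only, not from the Brazilian survey).
activities = [3.2, 4.1, 4.8, 5.0, 5.3, 5.6, 6.0, 6.5]

# A DRL is conventionally set at the third quartile (75th percentile)
# of the observed activity distribution.
q1, q2, q3 = quantiles(activities, n=4, method="inclusive")
drl_mbq_per_kg = q3

# 1 mCi = 37 MBq, so divide to report the DRL in mCi/kg as well.
drl_mci_per_kg = drl_mbq_per_kg / 37.0
print(f"Suggested DRL: {drl_mbq_per_kg:.2f} MBq/kg ({drl_mci_per_kg:.3f} mCi/kg)")
```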
Abstract:
We study an intertemporal asset pricing model in which a representative consumer maximizes expected utility derived from both the ratio of his consumption to some reference level and this level itself. If the reference consumption level is assumed to be determined by past consumption levels, the model generalizes the usual habit formation specifications. When the reference level growth rate is made dependent on the market portfolio return and on past consumption growth, the model mixes a consumption CAPM with habit formation together with the CAPM. It therefore provides, in an expected utility framework, a generalization of the non-expected recursive utility model of Epstein and Zin (1989). When we estimate this specification with aggregate per capita consumption, we obtain economically plausible values of the preference parameters, in contrast with the habit formation or the Epstein-Zin cases taken separately. All tests performed with various preference specifications confirm that the reference level enters significantly in the pricing kernel.
Abstract:
This thesis presents the results of an investigation into the limits of the random errors contained in the basic data of Physical Oceanography and their propagation through the computational procedures. It also suggests a method which increases the reliability of the derived results. The thesis is presented in eight chapters, including the introductory chapter. Chapter 2 discusses the general theory of errors relevant to the propagation of errors in Physical Oceanographic computations. The error components contained in the independent oceanographic variables, namely temperature, salinity and depth, are delineated and quantified in chapter 3. Chapter 4 discusses and derives the magnitude of errors in the computation of the dependent oceanographic variables (density in situ, σt, specific volume and specific volume anomaly) due to the propagation of errors contained in the independent oceanographic variables. The errors propagated into the computed values of the derived quantities, namely dynamic depth and relative currents, are estimated and presented in chapter 5. Chapter 6 reviews the existing methods for the identification of the level of no motion and suggests a method for the identification of a reliable zero reference level. Chapter 7 discusses the available methods for extending the zero reference level into shallow regions of the oceans and suggests a new, more reliable method. A procedure of graphical smoothing of dynamic topographies between the error limits to provide more reliable results is also suggested in this chapter. Chapter 8 deals with the computation of the geostrophic current from these smoothed values of dynamic heights, with reference to the selected zero reference level. The summary and conclusions are also presented in this chapter.
Abstract:
We present three new stable gravimetric inversion methods for estimating the relief of an arbitrary interface separating two media. To guarantee the stability of the solution, we introduce a priori information about the interface to be mapped through the minimization of one (or more) stabilizing functionals. The three methods therefore differ in the kinds of physical-geological information they incorporate. In the first method, called global smoothness, the depths of the interface are estimated at discrete points, assuming a priori knowledge of the density contrast between the media. To stabilize the inverse problem we introduce two constraints: (a) closeness between the estimated and true interface depths at a few points provided by boreholes; and (b) closeness between depths estimated at adjacent points. The combination of these two constraints imposes a uniform smoothness on the whole estimated interface while simultaneously minimizing, at a few points, the misfits between the depths known from boreholes and those estimated at the same points. The second method, called weighted smoothness, estimates the interface depths at discrete points, assuming a priori knowledge of the density contrast. This method incorporates the geological information that the interface is smooth except in regions of discontinuities produced by faults; that is, the interface is predominantly smooth but locally discontinuous. To incorporate this information, we developed an iterative process in which three kinds of constraints are imposed on the parameters: (a) weighted closeness between depths estimated at adjacent points; (b) lower and upper bounds on the depths; and (c) closeness between all estimated depths and a known numerical value.
Starting from the solution estimated by the global smoothness method, this second method iteratively sharpens the geometric features present in the initial solution; that is, smooth regions of the interface tend to become smoother and abrupt regions tend to become more abrupt. To this end, the method assigns different weights to the closeness constraint between adjacent depths. These weights are automatically updated so as to sharpen the discontinuities subtly detected by the global smoothness solution. Constraints (b) and (c) are used to compensate for the loss of stability caused by the introduction of weights close to zero in some of the closeness constraints between adjacent parameters, and to incorporate the a priori information that the deepest region of the interface is flat and horizontal. Constraint (b) strictly imposes that any estimated depth is non-negative and smaller than the maximum interface depth known a priori; constraint (c) imposes that all estimated depths are close to a value that deliberately violates the maximum interface depth. The compromise between the conflicting constraints (b) and (c) biases the final solution toward sharpening vertical discontinuities and producing a smooth, flattened estimate of the deepest region. The third method, called minimum moment of inertia, estimates the density contrasts of a subsurface region discretized into elementary prismatic volumes. This method incorporates the geological information that the interface to be mapped bounds an anomalous source whose horizontal dimensions are larger than its largest vertical dimension, with borders dipping vertically or toward the center of mass, and that all the anomalous mass (or mass deficiency) is compactly concentrated around a reference level.
Conceptually, this information is introduced by minimizing the moment of inertia of the sources with respect to the a priori known reference level. This minimization is carried out in a parameter subspace consisting of compact sources with borders dipping vertically or toward the center of mass. In practice, the information is introduced through an iterative process that starts from a solution whose moment of inertia is close to zero and adds, at each iteration, a contribution with minimum moment of inertia with respect to the reference level, so that the new estimate honors the minimum and maximum bounds on the density contrast while simultaneously minimizing the misfits between the observed and fitted gravity data. In addition, the iterative process tends to "freeze" the estimates at one of the bounds (minimum or maximum). The final result is an anomalous source compacted around the reference level whose density-contrast distribution tends to the upper bound (in absolute value) established a priori. These three methods were applied to synthetic and real data produced by the basement relief of sedimentary basins. Global smoothness produced a good reconstruction of the framework of basins that violate the smoothness condition, both on synthetic data and on data from the Recôncavo Basin. This method has the lowest resolution of the three. Weighted smoothness improved the resolution of basement reliefs presenting faults with large throws and high dip angles, indicating great potential for interpreting the framework of extensional basins, as shown in tests with synthetic data and with data from Steptoe Valley, Nevada, USA, and from the Recôncavo Basin. In the minimum moment of inertia method, the mean terrain level was taken as the reference level.
Applications to synthetic data and to the Bouguer anomalies of the San Jacinto Graben, California, USA, and of the Recôncavo Basin showed that, compared with the global and weighted smoothness methods, this method estimates small-throw faults with excellent resolution without imposing the restriction that the interface has few local discontinuities, as in the weighted smoothness method.
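The smoothness stabilization described above amounts to a damped (Tikhonov-type) least-squares solve with a first-difference penalty tying adjacent interface depths together. A minimal sketch under assumed placeholders: the forward matrix A, data g, and weight lam below are illustrative, not the thesis's actual discretization.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                  # number of interface depth points
A = rng.normal(size=(30, n))            # hypothetical forward operator
g = A @ np.linspace(1.0, 2.0, n)        # synthetic "gravity" data

# First-difference operator: each row penalizes the difference between
# depths at adjacent points (the smoothness constraint).
D = np.diff(np.eye(n), axis=0)
lam = 0.1                               # regularization weight

# Normal equations of min ||A p - g||^2 + lam ||D p||^2 :
# (A^T A + lam D^T D) p = A^T g
p = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ g)
print(p.shape)
```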
Abstract:
BACKGROUND: Physiological data obtained with the pulmonary artery catheter (PAC) are susceptible to errors in measurement and interpretation. Little attention has been paid to the relevance of errors in hemodynamic measurements performed in the intensive care unit (ICU). The aim of this study was to assess the errors related to the technical aspects (zeroing and reference level) and the actual measurement (curve interpretation) of the pulmonary artery occlusion pressure (PAOP). METHODS: Forty-seven participants in a special ICU training program and 22 ICU nurses were tested without pre-announcement. All participants had previously been exposed to the clinical use of the method. The first task was to set up a pressure measurement system for the PAC (zeroing and reference level) and the second to measure the PAOP. RESULTS: The median difference from the reference mid-axillary zero level was -3 cm (-8 to +9 cm) for physicians and -1 cm (-5 to +1 cm) for nurses. The median difference from the reference PAOP was 0 mmHg (-3 to 5 mmHg) for physicians and 1 mmHg (-1 to 15 mmHg) for nurses. When PAOP values were adjusted for the differences from the reference transducer level, the median differences from the reference PAOP values were 2 mmHg (-6 to 9 mmHg) for physicians and 2 mmHg (-6 to 16 mmHg) for nurses. CONCLUSIONS: Measurement of the PAOP is susceptible to substantial error as a result of practical mistakes. Comparison of results between ICUs or practitioners is therefore not possible.
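The transducer-level errors reported above translate into pressure offsets through the hydrostatic column: a transducer sitting below the mid-axillary zero reference reads high by the height of the intervening fluid column. The conversion factor (1 mmHg supports about 1.36 cm of water) is standard physics; the function and example values below are an illustrative sketch, not the study's protocol.

```python
# Hydrostatic correction of a pressure reading for transducer mis-levelling.
CMH2O_PER_MMHG = 1.35951  # 1 mmHg ~= 1.36 cmH2O

def corrected_paop(measured_mmhg: float, transducer_offset_cm: float) -> float:
    """Correct a PAOP reading for a transducer placed `transducer_offset_cm`
    centimetres BELOW the zero reference level (negative means above it)."""
    return measured_mmhg - transducer_offset_cm / CMH2O_PER_MMHG

# A transducer 3 cm below the reference overreads by roughly 2.2 mmHg:
print(round(corrected_paop(12.0, 3.0), 1))
```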
Abstract:
The presented database contains time-referenced sea ice draft values from upward looking sonar (ULS) measurements in the Weddell Sea, Antarctica. The sea ice draft data can be used to infer the thickness of the ice. They were collected during the period 1990-2008. In total, the database includes measurements from 13 locations in the Weddell Sea and was generated from more than 3.7 million measurements of sea ice draft. The files contain uncorrected raw drafts, corrected drafts and the basic parameters measured by the ULS. The measurement principle, the data processing procedure and the quality control are described in detail. To account for the unknown speed of sound in the water column above the ULS, two correction methods were applied to the draft data. The first method is based on defining a reference level from the identification of open water leads. The second method uses a model of sound speed in the oceanic mixed layer and is applied to ice draft in austral winter. Both methods are discussed and their accuracy is estimated. Finally, selected results of the processing are presented.
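The first correction method, which defines a zero reference level from open-water leads, can be sketched as follows: where the sonar sees an open lead the true draft is zero, so any nonzero reading there measures the bias from the unknown sound speed, and subtracting that offset corrects the record. The draft values and lead flags below are illustrative, not from the database.

```python
def correct_drafts(raw_drafts, open_water_flags):
    """Subtract the mean apparent draft at open-water samples, so that
    open water defines the zero reference level for the whole record."""
    open_water = [d for d, is_ow in zip(raw_drafts, open_water_flags) if is_ow]
    offset = sum(open_water) / len(open_water)   # bias from sound-speed error
    return [d - offset for d in raw_drafts]

raw = [0.31, 1.52, 2.10, 0.29, 1.80]         # metres, apparent draft
leads = [True, False, False, True, False]    # samples flagged as open water
print([round(d, 2) for d in correct_drafts(raw, leads)])
```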
Abstract:
A clear demonstration of topological superconductivity (TS) and Majorana zero modes remains one of the major pending goals in the field of topological materials. One common strategy to generate TS is through the coupling of an s-wave superconductor to a helical half-metallic system. Numerous proposals for the latter have been put forward in the literature, most of them based on semiconductors or topological insulators with strong spin-orbit coupling. Here, we demonstrate an alternative approach for the creation of TS in graphene-superconductor junctions without the need for spin-orbit coupling. Our prediction stems from the helicity of graphene’s zero-Landau-level edge states in the presence of interactions and from the possibility, experimentally demonstrated, of tuning their magnetic properties with in-plane magnetic fields. We show how canted antiferromagnetic ordering in the graphene bulk close to neutrality induces TS along the junction and gives rise to isolated, topologically protected Majorana bound states at either end. We also discuss possible strategies to detect their presence in graphene Josephson junctions through Fraunhofer pattern anomalies and Andreev spectroscopy. The latter, in particular, exhibits strong unambiguous signatures of the presence of the Majorana states in the form of universal zero-bias anomalies. Remarkable progress has recently been reported in the fabrication of the proposed type of junctions, which offers a promising outlook for Majorana physics in graphene systems.
Abstract:
The quantification of the available energy in the environment is important because it determines photosynthesis, evapotranspiration and, therefore, the final yield of crops. Instruments for measuring the energy balance are costly, so indirect estimation alternatives are desirable. This study assessed the performance of Deardorff's model during a cycle of a sugarcane crop in Piracicaba, State of São Paulo, Brazil, in comparison to the aerodynamic method. This mechanistic model simulates the energy fluxes (sensible heat, latent heat and net radiation) at three levels (atmosphere, canopy and soil) using only air temperature, relative humidity and wind speed measured at a reference level above the canopy, the crop leaf area index, and some pre-calibrated parameters (canopy albedo, soil emissivity, atmospheric transmissivity and hydrological characteristics of the soil). The analysis was made for different time scales, insolation conditions and seasons (spring, summer and autumn). Analyzing all data at 15-minute intervals, the model presented good performance for net radiation simulation under different insolations and seasons. The latent and sensible heat fluxes in the atmosphere did not differ from the aerodynamic method data during the autumn. The sensible heat flux in the soil was poorly simulated by the model, due to the poor performance of the soil water balance method. Overall, Deardorff's model improved the flux simulations in comparison to the aerodynamic method when more insolation was available in the environment.
Abstract:
Master's dissertation in Economics and Business Sciences, 6 December 2012, Universidade dos Açores.
Abstract:
Computed Tomography (CT) is currently the imaging method that contributes most to the collective dose resulting from medical exposures. This study aims to determine the CT Dose Index (CTDI) and dose-length product (DLP) values for adult head and chest examinations on a multidetector CT scanner, and to perform an objective and subjective analysis of image quality. The CTDI and DLP values were determined using an ionization chamber and head and chest phantoms. Objective and subjective image-quality analyses were then carried out with the Catphan® 500 phantom and with observers, respectively. The results obtained were higher than the European guidelines for the head protocol (CTDIvol = 80.13 mGy and DLP = 1209.22 mGy.cm) and lower for the chest protocol (CTDIvol = 8.37 mGy and DLP = 274.71 mGy.cm). In the objective image-quality analysis, with the exception of low-contrast resolution in the head protocol, all other criteria analyzed complied with the legislation. In the subjective image-quality analysis, there was a statistically significant difference between the scores assigned by the observers to the images for the evaluated parameters (p = 0.000-0.005).
Abstract:
Background: Gene expression analysis has emerged as a major biological research area, with real-time quantitative reverse transcription PCR (RT-QPCR) being one of the most accurate and widely used techniques for expression profiling of selected genes. In order to obtain results that are comparable across assays, a stable normalization strategy is required. In general, the normalization of PCR measurements between different samples uses one to several control genes (e.g. housekeeping genes), from which a baseline reference level is constructed. Thus, the choice of the control genes is of utmost importance, yet there is no generally accepted standard technique for screening a large number of candidates and identifying the best ones. Results: We propose a novel approach for scoring and ranking candidate genes for their suitability as control genes. Our approach relies on publicly available microarray data and allows the combination of multiple data sets originating from different platforms and/or representing different pathologies. The use of microarray data allows the screening of tens of thousands of genes, producing very comprehensive lists of candidates. We also provide two lists of candidate control genes: one which is breast cancer-specific and one with more general applicability. Two genes from the breast cancer list which had not been previously used as control genes are identified and validated by RT-QPCR. Open-source R functions are available at http://www.isrec.isb-sib.ch/~vpopovic/research/ Conclusion: We proposed a new method for identifying candidate control genes for RT-QPCR which was able to rank thousands of genes according to predefined suitability criteria, and we applied it to the case of breast cancer. We also showed empirically that translating the results from the microarray to the PCR platform was achievable.
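One common heuristic for such stability scoring (not necessarily the authors' actual criteria) ranks each gene by its coefficient of variation across samples, so that stable, well-expressed genes come first. The expression matrix and gene identifiers below are hypothetical.

```python
import numpy as np

expr = np.array([                  # rows: genes, cols: samples (hypothetical)
    [10.0, 10.2,  9.9, 10.1],      # stable, highly expressed
    [ 5.0,  9.0,  2.0,  7.5],      # variable across samples
    [ 8.0,  8.1,  7.9,  8.0],      # stable
])
gene_ids = ["GENE_A", "GENE_B", "GENE_C"]   # hypothetical identifiers

# Coefficient of variation per gene: low CV = stable candidate control gene.
cv = expr.std(axis=1) / expr.mean(axis=1)
ranking = [gene_ids[i] for i in np.argsort(cv)]
print(ranking)  # most stable candidates first
```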
Abstract:
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and the empirical logit approach, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about a threefold increase in the strength of the association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of adjustment for error is influenced by the number and forms of the covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model.
Abstract:
Climate science indicates that climate stabilization requires low GHG emissions. Is this consistent with nondecreasing human welfare? Our welfare or utility index emphasizes education, knowledge, and the environment. We construct and calibrate a multigenerational model with intertemporal links provided by education, physical capital, knowledge and the environment. We reject discounted utilitarianism and adopt, first, the Pure Sustainability Optimization (or Intergenerational Maximin) criterion, and, second, the Sustainable Growth Optimization criterion, which maximizes the utility of the first generation subject to a given future rate of growth. We apply these criteria to our calibrated model via a novel algorithm inspired by the turnpike property. The computed paths yield levels of utility higher than the level at reference year 2000 for all generations. They require the doubling of the fraction of labor resources devoted to the creation of knowledge relative to the reference level, whereas the fractions of labor allocated to consumption and leisure are similar to the reference ones. On the other hand, higher growth rates require substantial increases in the fraction of labor devoted to education, together with moderate increases in the fractions of labor devoted to knowledge and to investment in physical capital.