860 results for Mean diameter
Abstract:
There is hardly a case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these “zero data” represent a mathematical challenge for interpretation. We need to start by recognizing that there are true zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot coexist with nepheline. Another common essential zero is a north azimuth, although we can always replace that zero with the value 360°. These are known as “essential zeros”; but what can we do with “rounded zeros”, which result from values below the detection limit of the equipment? Amalgamation, e.g., adding Na2O and K2O as total alkalis, is one solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment used will generate spurious distributions, especially in ternary diagrams. The same will occur if we replace the zero values with a small amount using non-parametric or parametric techniques (imputation). The method we propose takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between the copper values and the molybdenum values, but while copper will always be above the limit of detection, many of the molybdenum values will be “rounded zeros”. We therefore take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the “rounded” zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the “rounded zeros”, but one that depends on the value of the other variable. Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
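The regression step described above can be sketched in a few lines. This is a minimal illustration, assuming paired Cu/Mo assays, a hypothetical 2 ppm Mo detection limit, and a log-log fit (motivated by the lognormality the abstract mentions, though the abstract does not prescribe log space); the synthetic data exist only to make the sketch runnable.

```python
# Minimal sketch of regression-based replacement of "rounded zeros",
# assuming Cu is always detected and Mo is sometimes censored.
import numpy as np

rng = np.random.default_rng(0)
cu = rng.lognormal(mean=6.0, sigma=0.8, size=200)    # ppm, all detected
mo = 0.02 * cu * rng.lognormal(0.0, 0.4, size=200)   # correlated Mo, ppm
det_limit = 2.0                                      # hypothetical Mo detection limit
detected = mo >= det_limit

# 1. Lower quartile of the *real* (detected) Mo values.
q1 = np.quantile(mo[detected], 0.25)
low = detected & (mo <= q1)

# 2. Regression of Mo on Cu fitted on that lower-quartile subset
#    (done in log space here, an assumption given lognormal data).
slope, intercept = np.polyfit(np.log(cu[low]), np.log(mo[low]), deg=1)

# 3. Replace each rounded zero by the value predicted from its Cu assay,
#    so the imputed value varies sample by sample instead of being fixed.
mo_imputed = mo.copy()
mo_imputed[~detected] = np.exp(intercept + slope * np.log(cu[~detected]))
```

Fitting on the lower quartile rather than on all detected values keeps the extrapolation anchored near the detection limit, which is where the imputed values must land.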
Abstract:
Introduction: Glaucoma is the third leading cause of blindness worldwide, and timely diagnosis requires evaluating the optic nerve cup, which is related to the area of the disc. Some reports suggest that large disc areas (macrodiscs) may be protective, while others associate them with susceptibility to glaucoma. Objective: To establish whether there is an association between macrodisc and glaucoma in individuals studied with Optical Coherence Tomography (OCT) at the Fundación Oftalmológica Nacional. Methods: Cross-sectional association study including 25 eyes with primary open-angle glaucoma and 74 healthy eyes. Each individual underwent an ophthalmological examination, computerized visual field testing, and optic nerve OCT. Optic disc areas and the number of macrodiscs were compared between groups, with macrodisc defined according to Jonas as an area greater than the mean plus two standard deviations, and according to Adabache, who studied a Mexican population, as an area ≥3.03 mm². Results: The mean optic disc area was 2.78 vs. 2.80 mm² in the glaucoma and healthy groups, respectively. By the Jonas criterion, one macrodisc was observed in the healthy group; by the Adabache criterion, eight and twenty-five macrodiscs were found in the glaucoma and healthy groups, respectively (OR = 0.92; 95% CI = 0.35-2.43). Discussion: There was no significant difference (P = 0.870) in disc area between the two groups, and the percentage of macrodiscs was similar in both groups, although the small number of macrodiscs did not allow statistical conclusions about the relationship between macrodisc and glaucoma.
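For reference, the reported association can be reproduced from the abstract's own counts (8 of 25 glaucoma eyes and 25 of 74 healthy eyes with a macrodisc by the Adabache criterion). A minimal sketch, assuming the 95% CI was a Wald interval on the log odds ratio:

```python
# Reproduce the abstract's OR for macrodisc vs. glaucoma from its 2x2
# counts. The Wald interval is an assumption about how the reported
# confidence interval was obtained.
import math

a, b = 8, 25 - 8     # glaucoma: macrodisc / no macrodisc
c, d = 25, 74 - 25   # healthy:  macrodisc / no macrodisc

odds_ratio = (a / b) / (c / d)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")  # 0.92, 0.35-2.43
```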
Abstract:
Introduction: Although acquired melanocytic nevi are a frequent reason for consultation in our population, there are no studies in Colombia on their treatment, and worldwide there is very little literature on the subject, leaving a conceptual gap in this field. Objectives: To evaluate changes in the presence of pigment and scarring in acquired melanocytic nevi treated with laser, based on the experience of a single center in Bogotá. Materials and methods: An observational before-and-after study of a historical cohort of 90 cases of acquired melanocytic nevi treated with laser at Uniláser Medica, evaluating the presence of pigment, scarring, and other variables, with a follow-up assessment no less than 3 months after the intervention. Results: Ages ranged from 18 to 51 years (mean 27.59 years), with phototypes III-V; in 32% of cases a single session of CO2 and Erbium laser was sufficient for complete clearance of the lesion. In 54.4% of cases the erythema lasted 1 to 3 months. Residual pigment remained at follow-up in 64.4% of cases, but in 48.2% of these it amounted to only 5-10% of the initial pigment. Scarring occurred in 58.9% of cases, and in 63% of these the scar was cosmetically acceptable. Patient satisfaction was high despite pigment persistence and/or the presence of a scar. Discussion: Laser treatment of acquired melanocytic nevi is a therapeutic option that produces statistically significant changes in pigment, yields cosmetically acceptable scars, and achieves high patient satisfaction. Analytical studies are required to determine the efficacy of the treatment.
Abstract:
Objective: To describe events during the long-term follow-up of patients undergoing percutaneous closure of patent foramen ovale (PFO) and atrial septal defect (ASD) with the Amplatzer® device. Materials and methods: Follow-up study of a historical cohort of patients who underwent percutaneous closure of PFO or ASD with the Amplatzer device between 2001 and 2013. Of the 92 (100%) patients treated, clinical follow-up was completed for 55 (60%); 37 (40%) patients could not be contacted. Medical records were reviewed and telephone interviews were conducted. Results: The mean age of the patients was 58 years (median 62 years), and 73% of the interventions were performed in women. There were 30 (55%) percutaneous ASD closures and 25 (45%) PFO closures. Two procedure-related complications occurred (3.6%): an allergic reaction and a hepatic hematoma. The mean diameter of the septal defect was 15 mm (SD 9; median 16 mm), and the size of the implanted device was 40 mm (SD 3.9 mm) (13 mm and 34 mm). Mean follow-up was 44 months (SD 28.6; range 7-114 months; median 36 months). No events were recorded; the survival probability in this group of patients was 100% and the probability of death was 0%.
Abstract:
A silo is an underground cavity designed to store the harvest, especially cereals. If ideal conditions of temperature and humidity are maintained, cereals can be preserved in them for a long period of time, which according to Varro could reach 50 years. These exceptional possibilities made storage in silos one of the most widely used methods of long-term cereal conservation in pre-industrial societies around the world. The standard silo of north-east Catalonia was dug into clay, had no lining, and had a tube-shaped mouth up to 0.77 m in maximum diameter and 0.42 m deep. The profile was concave, with the maximum diameter located in the central third of the silo and a bottom that was either concave or flat. Both the depth and the maximum diameter lay between 1.75 and 2 m, with a very small difference between the two measures. The resulting capacity would be between 1 and 3 tonnes of cereals, which in standard terms of production would correspond to the harvest of between 1.5 and 4 hectares of land.
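As a rough plausibility check on the reported figures, one can approximate the concave profile as an ellipsoid of revolution and convert volume to grain mass. The ellipsoid shape and a bulk density of about 0.75 t/m³ for cereal grain are illustrative assumptions, not values from the study:

```python
# Rough capacity check for the "standard" silo, approximating the
# concave profile as an ellipsoid of revolution.
import math

def silo_capacity_t(depth_m: float, max_diameter_m: float,
                    bulk_density_t_m3: float = 0.75) -> float:
    """Grain mass (tonnes) for an ellipsoid-of-revolution silo profile."""
    volume_m3 = (4 / 3) * math.pi * (max_diameter_m / 2) ** 2 * (depth_m / 2)
    return volume_m3 * bulk_density_t_m3

for dim in (1.75, 2.0):  # depth ~ max diameter, per the abstract
    t = silo_capacity_t(dim, dim)
    print(f"{dim} m silo: ~{t:.1f} t")  # ~2.1 t and ~3.1 t, broadly within 1-3 t
```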
Abstract:
The behavior of the Asian summer monsoon is documented and compared using the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis (ERA) and the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) Reanalysis. In terms of seasonal mean climatologies the results suggest that, in several respects, the ERA is superior to the NCEP-NCAR Reanalysis. The overall better simulation of the precipitation, and hence the diabatic heating field, over the monsoon domain in ERA means that the analyzed circulation is probably nearer to reality. In terms of interannual variability, inconsistencies still exist in the definition of weak and strong monsoon years based on typical monsoon indices such as All-India Rainfall (AIR) anomalies and the large-scale wind-shear-based dynamical monsoon index (DMI). Two dominant modes of interannual variability have been identified that together explain nearly 50% of the variance. Individually, they have many features in common with the composite flow patterns associated with weak and strong monsoons, when defined in terms of regional AIR anomalies and the large-scale DMI. The reanalyses also show a common dominant mode of intraseasonal variability that describes the latitudinal displacement of the tropical convergence zone from its oceanic to its continental regime and essentially captures the low-frequency active/break cycles of the monsoon. The relationship between interannual and intraseasonal variability has been investigated by considering the probability density function (PDF) of the principal component of the dominant intraseasonal mode. Based on the DMI, there is an indication that in years with a weaker monsoon circulation the PDF is skewed toward negative values (i.e., break conditions). Similarly, the PDFs for El Niño and La Niña years suggest that El Niño predisposes the system to more break spells, although the sample size may limit the statistical significance of the results.
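The mode extraction and the PDF diagnostic described here follow a standard EOF/principal-component recipe, which can be sketched as below. The random anomaly field, its dimensions, and the use of scipy's skewness statistic are assumptions for illustration; they stand in for the reanalysis fields used in the study.

```python
# Extract the dominant mode of an anomaly field by EOF analysis (via
# SVD) and inspect the skewness of its principal component.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
anom = rng.standard_normal((600, 40))          # (time, space) anomaly matrix

u, s, vt = np.linalg.svd(anom - anom.mean(axis=0), full_matrices=False)
eof1 = vt[0]                                    # leading spatial pattern
pc1 = u[:, 0] * s[0]                            # its principal component
var_frac = s[0] ** 2 / np.sum(s ** 2)           # fraction of variance explained

# A PDF skewed toward negative PC values would correspond to the
# "more break spells" interpretation discussed in the abstract.
print(f"mode 1 variance fraction: {var_frac:.2f}, PC1 skewness: {skew(pc1):.2f}")
```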
Abstract:
The impact of doubled CO2 concentration on the Asian summer monsoon is studied using a coupled ocean-atmosphere model. Both the mean seasonal precipitation and interannual monsoon variability are found to increase in the future climate scenario presented. Systematic biases in current climate simulations of the coupled system prevent accurate representation of the monsoon-ENSO teleconnection, of prime importance for seasonal prediction and for determining monsoon interannual variability. By applying seasonally varying heat flux adjustments to the tropical Pacific and Indian Ocean surface in the future climate simulation, some assessment can be made of the impact of systematic model biases on future climate predictions. In simulations where the flux adjustments are implemented, the response to climate change is magnified, with the suggestion that systematic biases may be masking the true impact of increased greenhouse gas forcing. The teleconnection between ENSO and the Asian summer monsoon remains robust in the future climate, although the Indo-Pacific takes on more of a biennial character for long periods of the flux-adjusted simulation. Assessing the teleconnection across interdecadal timescales shows wide variations in its amplitude, despite the absence of external forcing. This suggests that recent changes in the observed record cannot be distinguished from internal variations and as such are not necessarily related to climate change.
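The interdecadal assessment of the teleconnection amounts to correlating ENSO and monsoon indices in sliding windows. A minimal sketch under assumed inputs (synthetic, stationary annual series and a 21-year window, neither taken from the paper) shows how widely such a correlation can swing even without external forcing:

```python
# Sliding-window correlation between annual ENSO and monsoon indices.
import numpy as np

rng = np.random.default_rng(2)
years = 150
enso = rng.standard_normal(years)                   # e.g. a Nino-3.4 anomaly
monsoon = -0.5 * enso + rng.standard_normal(years)  # e.g. a DMI, anticorrelated

window = 21
r = np.array([
    np.corrcoef(enso[i:i + window], monsoon[i:i + window])[0, 1]
    for i in range(years - window + 1)
])
# Wide swings in r, even with stationary statistics, illustrate why recent
# changes in the observed teleconnection are hard to attribute.
print(f"running correlation range: {r.min():.2f} to {r.max():.2f}")
```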
Combining altimetric/gravimetric and ocean model mean dynamic topography models in the GOCINA region
Abstract:
The atmospheric circulation changes predicted by climate models are often described using sea level pressure, which generally shows a strengthening of the mid-latitude westerlies. Recent observed variability is dominated by the Northern Annular Mode (NAM), which is equivalent barotropic, so that wind variations of the same sign are seen at all levels. However, in model predictions of the response to anthropogenic forcing, there is a well-known enhanced warming at low levels over the northern polar cap in winter. This means that there is a strong baroclinic component to the response. The projection of the response onto a NAM-like zonal index varies with height. While at the surface most models project positively onto the zonal index, throughout most of the depth of the troposphere many of the models give negative projections. The response to anthropogenic forcing therefore has a distinctive baroclinic signature which is very different from the NAM.
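A projection onto a zonal index of the kind described is an area-weighted regression of the response onto the mode's pattern, level by level. The sketch below uses entirely hypothetical profiles, built so the projection is positive at the surface and negative aloft, to illustrate the diagnostic rather than reproduce any model's output:

```python
# Level-by-level projection of a forced response onto a NAM-like pattern.
import numpy as np

lats = np.linspace(20, 90, 36)                 # latitude grid (deg N)
levels = np.array([1000, 850, 700, 500, 300])  # pressure levels (hPa)
weights = np.cos(np.deg2rad(lats))             # area weighting

# Hypothetical NAM-like pattern per level, and a baroclinic response
# whose structure changes sign with height (an invented example).
nam = np.sin(np.deg2rad(2 * (lats - 20)))[None, :].repeat(len(levels), axis=0)
response = np.array([1.0 - 0.4 * k for k in range(len(levels))])[:, None] * nam

for lev, resp, pat in zip(levels, response, nam):
    proj = np.sum(weights * resp * pat) / np.sum(weights * pat ** 2)
    print(f"{lev:4d} hPa: projection = {proj:+.2f}")  # +1.00 at 1000 hPa, -0.60 at 300 hPa
```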
Abstract:
In principle the global mean geostrophic surface circulation of the ocean can be diagnosed by subtracting a geoid from a mean sea surface (MSS). However, because the resulting mean dynamic topography (MDT) is approximately two orders of magnitude smaller than either of the constituent surfaces, and because the geoid is most naturally expressed as a spectral model while the MSS is a gridded product, in practice complications arise. Two algorithms for combining MSS and satellite-derived geoid data to determine the ocean's MDT are considered in this paper: a pointwise approach, whereby the gridded geoid height field is subtracted from the gridded MSS; and a spectral approach, whereby the spherical harmonic coefficients of the geoid are subtracted from an equivalent set of coefficients representing the MSS, from which the gridded MDT is then obtained. The essential difference is that with the latter approach the MSS is truncated, a form of filtering, just as with the geoid. This ensures that errors of omission resulting from the truncation of the geoid, which are small in comparison to the geoid but large in comparison to the MDT, are matched, and therefore negated, by similar errors of omission in the MSS. The MDTs produced by both methods require additional filtering. However, the spectral MDT requires less filtering to remove noise, and therefore it retains more oceanographic information than its pointwise equivalent. The spectral method also results in a more realistic MDT at coastlines.
1. Introduction
An important challenge in oceanography is the accurate determination of the ocean's time-mean dynamic topography (MDT). If this can be achieved with sufficient accuracy for combination with the time-dependent component of the dynamic topography, obtainable from altimetric data, then the resulting sum (i.e., the absolute dynamic topography) will give an accurate picture of surface geostrophic currents and ocean transports.
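The essential difference between the two algorithms can be demonstrated in one dimension. The sketch below is a Fourier analogue, not the paper's spherical-harmonic implementation: a toy "geoid" plus a small "MDT" form the "MSS", the geoid model is truncated at a degree l_max, and the omission error either leaks into the pointwise MDT or cancels in the spectral one. All fields and the truncation degree are invented for illustration:

```python
# 1-D Fourier analogue of the pointwise vs. spectral MDT computation.
import numpy as np

n = 1024
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
rng = np.random.default_rng(3)

# Toy "geoid" with a red spectrum and a small "MDT" riding on top of it;
# the observable "MSS" is their sum.
k = np.fft.rfftfreq(n, d=1 / n)
geoid_hat = (rng.standard_normal(len(k)) / (1 + k) ** 1.5).astype(complex)
geoid = np.fft.irfft(geoid_hat, n)
mdt_true = 0.01 * np.sin(3 * x)
mss = geoid + mdt_true

l_max = 60  # truncation degree of the satellite-only geoid model

def truncate(field, l_max):
    """Zero all wavenumbers above l_max (the spectral-space filter)."""
    fh = np.fft.rfft(field)
    fh[l_max + 1:] = 0.0
    return np.fft.irfft(fh, n)

geoid_model = truncate(geoid, l_max)        # what the satellite provides

mdt_pointwise = mss - geoid_model                  # omission error leaks in
mdt_spectral = truncate(mss, l_max) - geoid_model  # omission errors cancel

err_point = np.std(mdt_pointwise - mdt_true)
err_spec = np.std(mdt_spectral - mdt_true)
print(f"pointwise error: {err_point:.4f}, spectral error: {err_spec:.4f}")
```

Because the MSS is truncated with the same cutoff as the geoid, the spectral residual contains only the (here, fully resolved) MDT, which is the cancellation of omission errors the abstract describes.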
Abstract:
Previous assessments of the impacts of climate change on heat-related mortality use the "delta method" to create temperature projection time series that are applied to temperature-mortality models to estimate future mortality impacts. The delta method means that climate model bias in the modelled present does not influence the temperature projection time series and impacts. However, the delta method assumes that climate change will result only in a change in the mean temperature, yet there is evidence that the variability of temperature will also change with climate change. The aim of this paper is to demonstrate the importance of considering changes in temperature variability with climate change in impacts assessments of future heat-related mortality. We investigate future heat-related mortality impacts in six cities (Boston, Budapest, Dallas, Lisbon, London and Sydney) by applying temperature projections from the UK Meteorological Office HadCM3 climate model to the temperature-mortality models constructed and validated in Part 1. We investigate the impacts for four cases based on various combinations of mean and variability changes in temperature with climate change. The results demonstrate that higher mortality is attributed to increases in both the mean and the variability of temperature with climate change than to the change in mean temperature alone. This has implications for interpreting existing impacts estimates that have used the delta method. We present a novel method for the creation of temperature projection time series that includes changes in the mean and variability of temperature with climate change and is not influenced by climate model bias in the modelled present. The method should be useful for future impacts assessments. Few studies consider the implications that the limitations of the climate model may have on the heat-related mortality impacts. Here, we demonstrate the importance of considering this by conducting an evaluation of the daily and extreme temperatures from HadCM3, which demonstrates that the estimates of future heat-related mortality for Dallas and Lisbon may be overestimated due to positive climate model bias. Likewise, estimates for Boston and London may be underestimated due to negative climate model bias. Finally, we briefly consider uncertainties in the impacts associated with greenhouse gas emissions and acclimatisation. The uncertainties in the mortality impacts due to different future greenhouse gas emissions scenarios varied considerably by location. Allowing for acclimatisation to an extra 2°C in mean temperatures reduced future heat-related mortality by approximately half, relative to no acclimatisation, in each city.
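The contrast at the heart of the paper, a mean-only delta shift versus a projection that also perturbs variability, can be sketched directly. The observed series, the 3 °C mean change, the 1.2 variability ratio, and the 29 °C threshold below are all illustrative assumptions, not values from the study:

```python
# Delta method (mean shift only) vs. a projection that also scales
# variability about the mean, both anchored to observations so that
# model bias in the present does not enter.
import numpy as np

rng = np.random.default_rng(4)
t_obs = 20 + 4 * rng.standard_normal(3650)   # observed daily temps (deg C)

delta_mean = 3.0   # model future mean minus model present mean
sd_ratio = 1.2     # model future SD over model present SD

# Classic delta method: shift the observed series by the mean change.
t_delta = t_obs + delta_mean

# Mean *and* variability change: scale anomalies before shifting, so the
# projection inherits the modelled widening of the distribution.
t_mean_var = t_obs.mean() + sd_ratio * (t_obs - t_obs.mean()) + delta_mean

# Days above a hypothetical mortality threshold illustrate why the
# variability change matters for heat impacts.
threshold = 29.0
print((t_delta > threshold).mean(), (t_mean_var > threshold).mean())
```

Even with the same mean warming, the widened distribution crosses the threshold on noticeably more days, which is the mechanism behind the higher mortality estimates reported here.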