867 results for Mean diameter
Abstract:
Globalization and liberalization, with the entry of many prominent foreign manufacturers, changed the automobile scenario in India from the early 1990s. World leaders in automobile manufacturing such as Ford, General Motors, Honda, Toyota, Suzuki, Hyundai, Renault, Mitsubishi, Benz, BMW, Volkswagen and Nissan set up manufacturing units in India in joint ventures with Indian counterpart companies, making use of the Foreign Direct Investment policy of the Government of India. These manufacturers started capturing the hearts of Indian car customers with their technological and innovative product features, quality and reliability. The multiplicity of choices available to Indian passenger car buyers drastically changed the car purchase scenario in India, and particularly in the State of Kerala, transforming the automobile scene from a sellers' market to a buyers' market. Car customers started developing personal preferences and purchasing patterns that were hitherto unknown in the Indian automobile segment. The main purpose of this paper is to identify the parameters that influence the consumer purchase behaviour of passenger car owners in the State of Kerala and to develop a framework, so that further research can be carried out based on the framework and the identified parameters.
Abstract:
Adaptive filtering is a primary method for denoising the electrocardiogram (ECG), because it does not require the signal's statistical characteristics. In this paper, an adaptive filtering technique for denoising the ECG based on a Genetic Algorithm (GA) tuned Sign-Data Least Mean Square (SD-LMS) algorithm is proposed. This technique minimizes the mean-squared error between the primary input, which is a noisy ECG, and a reference input, which can be either noise that is correlated in some way with the noise in the primary input or a signal that is correlated only with the ECG in the primary input. Noise is used as the reference signal in this work. The algorithm was applied to records from the MIT-BIH Arrhythmia database to remove baseline wander and 60 Hz power line interference. The proposed algorithm gave an average signal-to-noise ratio improvement of 10.75 dB for baseline wander and 24.26 dB for power line interference, which is better than previously reported work.
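The core of such a noise canceller is the sign-data LMS weight update, w ← w + μ·e·sign(x). The following is a minimal sketch of that update, not the paper's implementation: the GA tuning of the step size is omitted, a sinusoid stands in for the ECG, and the filter order and μ are illustrative choices.

```python
import numpy as np

def sd_lms_cancel(primary, reference, mu=0.01, order=8):
    """Adaptive noise cancellation with the Sign-Data LMS update.
    primary   : noisy signal (desired signal + correlated noise)
    reference : noise correlated with the noise in `primary`
    Returns the error signal e, which converges to the cleaned signal."""
    n = len(primary)
    w = np.zeros(order)
    cleaned = np.zeros(n)
    for i in range(order - 1, n):
        x = reference[i - order + 1:i + 1][::-1]  # tap-delay line, newest first
        e = primary[i] - w @ x                    # error = current signal estimate
        w += mu * e * np.sign(x)                  # sign-data LMS weight update
        cleaned[i] = e
    return cleaned

# Illustrative run: a sinusoid corrupted by a filtered copy of the reference noise
rng = np.random.default_rng(0)
t = np.arange(4000)
clean = np.sin(2 * np.pi * t / 100)                    # stand-in for the ECG
noise = rng.standard_normal(t.size)                    # reference input
corrupt = 0.5 * noise + 0.3 * np.r_[0.0, noise[:-1]]   # noise reaching the primary
out = sd_lms_cancel(clean + corrupt, noise)
```

Using sign(x) instead of x in the update replaces a multiplication by a sign flip, which is why the sign-data variant is popular for low-cost real-time ECG hardware.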
Abstract:
Kriging is an interpolation technique whose optimality criteria are based on normality assumptions, either for observed or for transformed data. This is the case for normal, lognormal and multigaussian kriging. When kriging is applied to transformed scores, the optimality of the obtained estimators becomes a cumbersome concept: back-transformed optimal interpolations in transformed scores are not optimal in the original sample space, and vice versa. This lack of compatible optimality criteria induces a variety of problems in both point and block estimates. For instance, lognormal kriging, widely used to interpolate positive variables, has no straightforward way to build consistent and optimal confidence intervals for estimates. These problems are ultimately linked to the assumed space structure of the data support: for instance, positive values, when modelled with lognormal distributions, are assumed to be embedded in the whole real space, with the usual real space structure and Lebesgue measure.
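The back-transformation problem the abstract describes can be seen in a tiny simulation (generic synthetic data, not from the paper): the estimator that is optimal in log space, exp(mean(log z)), targets the geometric mean, which systematically undershoots the arithmetic mean exp(μ + σ²/2) of the lognormal variable in the original space.

```python
import numpy as np

# A positive variable following a lognormal distribution
rng = np.random.default_rng(42)
z = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

arithmetic_mean = z.mean()                    # the target in the original space
back_transformed = np.exp(np.log(z).mean())   # optimal in log space, then back-transformed

# An unbiased back-transform needs the lognormal correction factor exp(sigma^2 / 2)
corrected = back_transformed * np.exp(np.log(z).var() / 2)
```

With σ = 1 the naive back-transform lands near exp(0) = 1 while the arithmetic mean is near exp(0.5) ≈ 1.65, so the uncorrected estimator is biased low by roughly 40%; kriging variances make the analogous correction for interpolated values far less straightforward, which is the paper's point.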
Abstract:
There is hardly a case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" pose a mathematical challenge for interpretation. We need to start by recognizing that there are genuine zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot coexist with nepheline. Another common essential zero is a north azimuth; however, we can always replace that zero with the value 360°. These are known as "essential zeros", but what can we do with "rounded zeros" that result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is one solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment generates spurious distributions, especially in ternary diagrams. The same occurs if we replace the zero values with a small amount using non-parametric or parametric techniques (imputation). The method we propose takes into consideration the well-known relationships between certain elements. For example, in copper porphyry deposits there is always a good direct correlation between copper and molybdenum values, but while copper will always be above the detection limit, many of the molybdenum values will be "rounded zeros". So we take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values.
The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable. Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
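The lower-quartile regression step can be sketched as follows. This is a minimal illustration on synthetic Cu-Mo values, not the authors' code: the log-log linear form of the regression and every number below are assumptions made for the example.

```python
import numpy as np

def impute_rounded_zeros(cu, mo, detection_limit):
    """Impute below-detection Mo values from correlated Cu values.
    Fits a regression on the lower quartile of the *detected* Mo values
    (the ones closest in magnitude to the censored samples), then
    predicts each censored sample from its own Cu value."""
    detected = mo >= detection_limit
    q1 = np.quantile(mo[detected], 0.25)          # lower quartile of real values
    fit_mask = detected & (mo <= q1)
    # least-squares line in log-log space: log(mo) = a*log(cu) + b
    a, b = np.polyfit(np.log(cu[fit_mask]), np.log(mo[fit_mask]), 1)
    imputed = mo.astype(float).copy()
    imputed[~detected] = np.exp(a * np.log(cu[~detected]) + b)
    return imputed

# Synthetic correlated Cu-Mo data with 20% of Mo censored to zero
rng = np.random.default_rng(1)
cu = rng.lognormal(6.0, 0.5, 300)                 # hypothetical Cu ppm
mo = 0.05 * cu * rng.lognormal(0.0, 0.3, 300)     # hypothetical correlated Mo ppm
dl = np.quantile(mo, 0.2)                         # pretend detection limit
censored = np.where(mo < dl, 0.0, mo)             # the "rounded zeros"
filled = impute_rounded_zeros(cu, censored, dl)
```

Note the property the abstract emphasizes: the filled-in values are all different, because each depends on the covariate, rather than being a single constant replacement.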
Abstract:
Introduction: Glaucoma is the third leading cause of blindness worldwide, and timely diagnosis requires evaluating the optic nerve cup, which is related to the optic disc area. Some reports suggest that large disc areas (macrodiscs) may be protective, while others associate them with susceptibility to glaucoma. Objective: To establish whether there is an association between macrodisc and glaucoma in individuals examined with Optical Coherence Tomography (OCT) at the Fundación Oftalmológica Nacional. Methods: Cross-sectional association study including 25 eyes with primary open-angle glaucoma and 74 healthy eyes. Each individual underwent ophthalmological examination, computerized visual field testing, and optic nerve OCT. Optic disc areas and number of macrodiscs were compared between groups; a macrodisc was defined, following Jonas, as an area greater than the mean plus two standard deviations, and, following Adabache (who studied a Mexican population), as an area ≥ 3.03 mm². Results: Mean optic disc area was 2.78 mm² in the glaucoma group vs. 2.80 mm² in healthy eyes. By the Jonas criterion one macrodisc was observed, in the healthy group; by the Adabache criterion eight and twenty-five macrodiscs were found in the glaucoma and healthy groups, respectively (OR = 0.92, 95% CI = 0.35-2.43). Discussion: There was no significant difference (P = 0.870) in disc area between the two groups, and the percentage of macrodiscs was similar in both; however, the small number of macrodiscs precluded statistical conclusions about the association between macrodisc and glaucoma.
Abstract:
Introduction: Although acquired melanocytic nevi are a frequent reason for consultation in our population, there are no studies in Colombia on their treatment, and worldwide there is very little literature on the subject, leaving a conceptual gap in this field. Objectives: To evaluate changes in pigment presence and scarring in acquired melanocytic nevi treated with laser, based on the experience of a single centre in Bogotá. Materials and methods: A before-and-after observational study of a historical cohort of 90 cases of acquired melanocytic nevi treated with laser at Uniláser Medica, evaluating the presence of pigment, scarring and other variables, with a follow-up visit no less than 3 months after the intervention. Results: Ages ranged from 18 to 51 years (mean 27.59 years); phototypes were III-V. In 32% of cases a single CO2-and-erbium laser session sufficed for complete clearance of the lesion. Erythema lasted 1 to 3 months in 54.4% of cases. Residual pigment remained at follow-up in 64.4% of cases, but in 48.2% of these it amounted to only 5-10% of the initial pigment. Scarring occurred in 58.9% of cases, and in 63% of these the scar was cosmetically acceptable. Patient satisfaction was high despite pigment persistence and/or scarring. Discussion: Laser treatment of acquired melanocytic nevi is a therapeutic option that produces statistically significant changes in pigment, with cosmetically acceptable scarring and high patient satisfaction. Analytical studies are required to determine the efficacy of the treatment.
Abstract:
Objective: To describe the events occurring during long-term follow-up of patients undergoing percutaneous closure of patent foramen ovale (PFO) and atrial septal defect (ASD) with the Amplatzer® device. Materials and methods: Follow-up study of a historical cohort of patients who underwent percutaneous PFO and ASD closure with the Amplatzer device from 2001 to 2013. Of the 92 (100%) patients treated, clinical follow-up was achieved for 55 (60%); 37 (40%) could not be contacted. Medical records were reviewed and telephone interviews conducted. Results: Mean patient age was 58 years, with a median of 62 years; 73% of the procedures were performed in women. There were 30 (55%) percutaneous ASD closures and 25 (45%) PFO closures. Two procedure-related complications occurred (3.6%): an allergic reaction and a hepatic hematoma. Mean septal defect diameter was 15 mm (SD 9), with a median of 16 mm. The implanted device size was 40 mm (SD 3.9 mm) (13 mm and 34 mm). Mean follow-up was 44 months (SD 28.6; range 7-114 months), with a median of 36 months. No events were recorded; the probability of survival in this group of patients was 100% and the probability of death was 0%.
Abstract:
A silo is an underground cavity designed to store the harvest, especially grain. If ideal conditions of temperature and moisture are maintained, grain can be preserved in it for a long period of time, which according to Varro could reach 50 years. These exceptional properties made storage in silos one of the most widely used methods of long-term grain conservation in pre-industrial societies around the world. The standard silo of north-eastern Catalonia was dug into clay, had no lining, and had a tube-shaped mouth, 0.77 m in maximum diameter and 0.42 m deep. The profile was concave, with the maximum diameter located in the central third of the silo, and a bottom that was either concave or flat. The depth and maximum diameter both fell between 1.75 and 2 m, with a very small margin of difference between the two measures. The capacity resulting from these dimensions would be between 1 and 3 tonnes of cereal, which in standard production terms corresponds to the harvest of between 1.5 and 4 hectares of land.
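The closing capacity figure can be sanity-checked with a back-of-the-envelope volume model. This is my own sketch, not the study's calculation: the shape factor and the wheat bulk density of roughly 750 kg/m³ are assumptions.

```python
import math

def silo_capacity_tonnes(depth_m, max_diameter_m,
                         shape_factor=0.7, bulk_density_kg_m3=750):
    """Rough grain capacity of a concave-profile underground silo.
    shape_factor scales the enclosing cylinder down toward the
    barrel-like profile (1.0 = cylinder, ~2/3 = ellipsoid)."""
    cylinder_m3 = math.pi * (max_diameter_m / 2) ** 2 * depth_m
    return cylinder_m3 * shape_factor * bulk_density_kg_m3 / 1000

# The standard silo: depth and maximum diameter both between 1.75 and 2 m
low = silo_capacity_tonnes(1.75, 1.75)
high = silo_capacity_tonnes(2.0, 2.0)
```

Under these assumptions the estimate lands at roughly 2-3 tonnes, the same order of magnitude as the 1-3 tonne range quoted above.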
Abstract:
The behavior of the Asian summer monsoon is documented and compared using the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis (ERA) and the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) Reanalysis. In terms of seasonal mean climatologies the results suggest that, in several respects, the ERA is superior to the NCEP-NCAR Reanalysis. The overall better simulation of the precipitation, and hence the diabatic heating field, over the monsoon domain in ERA means that the analyzed circulation is probably nearer reality. In terms of interannual variability, inconsistencies in the definition of weak and strong monsoon years based on typical monsoon indices such as All-India Rainfall (AIR) anomalies and the large-scale wind-shear-based dynamical monsoon index (DMI) still exist. Two dominant modes of interannual variability have been identified that together explain nearly 50% of the variance. Individually, they have many features in common with the composite flow patterns associated with weak and strong monsoons, when defined in terms of regional AIR anomalies and the large-scale DMI. The reanalyses also show a common dominant mode of intraseasonal variability that describes the latitudinal displacement of the tropical convergence zone from its oceanic-to-continental regime and essentially captures the low-frequency active/break cycles of the monsoon. The relationship between interannual and intraseasonal variability has been investigated by considering the probability density function (PDF) of the principal component of the dominant intraseasonal mode. Based on the DMI, there is an indication that in years with a weaker monsoon circulation, the PDF is skewed toward negative values (i.e., break conditions). Similarly, the PDFs for El Niño and La Niña years suggest that El Niño predisposes the system to more break spells, although the sample size may limit the statistical significance of the results.
Abstract:
The impact of doubled CO2 concentration on the Asian summer monsoon is studied using a coupled ocean-atmosphere model. Both the mean seasonal precipitation and interannual monsoon variability are found to increase in the future climate scenario presented. Systematic biases in current climate simulations of the coupled system prevent accurate representation of the monsoon-ENSO teleconnection, of prime importance for seasonal prediction and for determining monsoon interannual variability. By applying seasonally varying heat flux adjustments to the tropical Pacific and Indian Ocean surface in the future climate simulation, some assessment can be made of the impact of systematic model biases on future climate predictions. In simulations where the flux adjustments are implemented, the response to climate change is magnified, with the suggestion that systematic biases may be masking the true impact of increased greenhouse gas forcing. The teleconnection between ENSO and the Asian summer monsoon remains robust in the future climate, although the Indo-Pacific takes on more of a biennial character for long periods of the flux-adjusted simulation. Assessing the teleconnection across interdecadal timescales shows wide variations in its amplitude, despite the absence of external forcing. This suggests that recent changes in the observed record cannot be distinguished from internal variations and as such are not necessarily related to climate change.
Combining altimetric/gravimetric and ocean model mean dynamic topography models in the GOCINA region
Abstract:
The atmospheric circulation changes predicted by climate models are often described using sea level pressure, which generally shows a strengthening of the mid-latitude westerlies. Recent observed variability is dominated by the Northern Annular Mode (NAM), which is equivalent barotropic, so that wind variations of the same sign are seen at all levels. However, in model predictions of the response to anthropogenic forcing, there is a well-known enhanced warming at low levels over the northern polar cap in winter. This means that there is a strong baroclinic component to the response. The projection of the response onto a NAM-like zonal index varies with height. While at the surface most models project positively onto the zonal index, throughout most of the depth of the troposphere many of the models give negative projections. The response to anthropogenic forcing therefore has a distinctive baroclinic signature which is very different from the NAM.