118 results for "Proximal methods"
Abstract:
Background: A higher burden of head and neck cancer has been reported to affect deprived populations. This study assessed the association between socioeconomic status and head and neck cancer, aiming to explore how this association is related to differences in tobacco and alcohol consumption across socioeconomic strata. Methods: We conducted a case-control study in Sao Paulo, Brazil (1998-2006), including 1017 incident cases of oral, pharyngeal and laryngeal cancer, and 951 sex- and age-matched controls. Education and occupation were distal determinants in the hierarchical approach; cumulative exposure to tobacco and alcohol were proximal risk factors. Outcomes of the hierarchical model were compared with fully adjusted ORs. Results: Individuals with lower education (OR 2.27; 95% CI 1.61 to 3.19) and those performing manual labour (OR 1.55; 95% CI 1.26 to 1.92) had a higher risk of disease. However, 54% of the association with lower education and 45% of the association with manual labour were explained by proximal lifestyle exposures, and socioeconomic status remained significantly associated with disease when adjusted for smoking and alcohol consumption. Conclusions: Socioeconomic differences in head and neck cancer are partially attributable to the distribution of tobacco smoking and alcohol consumption across socioeconomic strata. Additional mediating factors may explain the remaining association between socioeconomic status and head and neck cancer.
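The abstract above reports that 54% (education) and 45% (occupation) of the socioeconomic association was "explained by" proximal exposures, but does not state how that percentage was obtained. A common convention in such hierarchical/mediation analyses is the percentage of the excess odds ratio explained, (OR_unadjusted − OR_adjusted) / (OR_unadjusted − 1) × 100. The sketch below is only an illustration of that arithmetic under this assumed formula; the adjusted OR used is hypothetical, not a value from the paper.

```python
# Hedged sketch: percentage of the excess odds ratio explained by adjustment
# for proximal exposures (smoking, alcohol). The formula and the adjusted OR
# below are illustrative assumptions, not values reported in the paper.

def percent_excess_or_explained(or_unadjusted: float, or_adjusted: float) -> float:
    """(OR_unadjusted - OR_adjusted) / (OR_unadjusted - 1) * 100."""
    return (or_unadjusted - or_adjusted) / (or_unadjusted - 1.0) * 100.0

or_education = 2.27            # unadjusted OR for lower education (from the abstract)
or_education_adjusted = 1.58   # hypothetical OR after adjusting for tobacco and alcohol
print(f"Excess OR explained: {percent_excess_or_explained(or_education, or_education_adjusted):.0f}%")
# -> roughly 54%, consistent with the figure quoted in the abstract.
```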
Abstract:
Introduction: The occurrence of urolithiasis in pregnancy represents a challenge in both the diagnosis and the treatment of this condition, because it presents risks not only to the mother but also to the fetus. Surgical treatment may be indicated for patients with infection, persistent pain, or obstruction of a solitary kidney. We present our experience with the management of pregnant patients with ureteral calculi and a review of the literature. Materials and Methods: The charts of 19 pregnant patients with obstructive ureteral calculi were retrospectively reviewed. Gestational age ranged from 13 to 33 weeks. In all patients, the ureteral stone was diagnosed on abdominal ultrasound. Regarding location, 15 calculi were in the distal ureter, 3 in the proximal ureter, and 1 inside a ureterocele. Calculus size ranged from 6 to 10 mm (mean, 8 mm). The following criteria were used to indicate ureteroscopy: persistent pain with no improvement after clinical treatment, increase in renal dilation, or presence of uterine contractions. Nine patients (47.3%) underwent ureteroscopy. All calculi (100%) were removed with a stone basket extractor under continuous endoscopic vision. None of the calculi required the use of a lithotriptor. Results: Nine patients (47.3%) treated with clinical measures alone presented no obstetric complications and achieved spontaneous elimination of the calculi. The nine patients (47.3%) who underwent ureteroscopy had no surgical complications. There was remission of pain in all cases after ureteroscopy and ureteral catheter placement. Conclusion: The diagnosis and treatment of ureteral lithiasis in pregnant women present potential risks for the fetus and the mother. Conservative management is the first option, but ureteroscopy can be performed safely and with high success rates.
Abstract:
Background: Mites (Acari) have traditionally been treated as monophyletic, albeit composed of two major lineages: Acariformes and Parasitiformes. Yet recent studies based on morphology, molecular data, or combinations thereof have increasingly drawn their monophyly into question. Furthermore, the usually basal (molecular) position of one or both mite lineages among the chelicerates is in conflict with their morphology and with the widely accepted view that mites are close relatives of Ricinulei. Results: The phylogenetic position of the acariform mites is examined by employing SSU and partial LSU sequences, together with morphology, from 91 extant chelicerate terminals (40 of them Acariformes). In a static homology framework, molecular sequences were aligned using their secondary structure as a guide, regions of ambiguous alignment were discarded, and the pre-aligned sequences were analyzed under parsimony and under different mixed models in a Bayesian inference. Parsimony and Bayesian analyses led to trees largely congruent with respect to well-supported infraordinal branches, but with low support for interordinal relationships. An exception is Solifugae + Acariformes (P.P. = 100%, J. = 0.91). In a dynamic homology framework, two analyses were run: a standard POY analysis and an analysis constrained by secondary structure. Both analyses led to largely congruent trees, supporting a (Palpigradi (Solifugae, Acariformes)) clade and Ricinulei as the sister group of Tetrapulmonata, with the topology (Ricinulei (Amblypygi (Uropygi, Araneae))). Combined analyses with two different morphological data matrices were run in order to evaluate the impact of constraining the analysis on the recovered topology when employing secondary structure as a guide for homology establishment. The constrained combined analysis yielded two topologies similar to the exclusively molecular analysis for both morphological matrices, except for the recovery of Pedipalpi instead of the (Uropygi, Araneae) clade. The standard (direct optimization) POY analysis, however, led to the recovery of trees differing in the absence of the otherwise well-supported group Solifugae + Acariformes. Conclusions: Previous studies combining ribosomal sequences and morphology often recovered topologies similar to purely morphological analyses of Chelicerata. The apparent stability of certain clades not recovered here, like Haplocnemata and Acari, is regarded as a byproduct of the way the molecular homology was previously established using the instrumentalist approach implemented in POY. Constraining the analysis by a priori homology assessment is defended here as a way of maintaining the severity of the test when adding new data to the analysis. Although the strength of the method advocated here is that it keeps phylogenetic information from regions usually discarded in an exclusively static homology framework, it still has the inconvenience of being uninformative about the effect of alignment ambiguity on resampling-based methods of clade support estimation. Finally, putative morphological apomorphies of Solifugae + Acariformes are the reduction of the proximal cheliceral podomere, medial abutting of the leg coxae, loss of the sperm nuclear membrane, and presence of differentiated germinative and secretory regions in the testis delivering their products into a common lumen.
Abstract:
Aerosol samples were collected at a pasture site in the Amazon Basin as part of the project LBA-SMOCC-2002 (Large-Scale Biosphere-Atmosphere Experiment in Amazonia - Smoke Aerosols, Clouds, Rainfall and Climate: Aerosols from Biomass Burning Perturb Global and Regional Climate). Sampling was conducted during the late dry season, when the aerosol composition was dominated by biomass burning emissions, especially in the submicron fraction. A 13-stage Dekati low-pressure impactor (DLPI) was used to collect particles with nominal aerodynamic diameters (D_p) ranging from 0.03 to 10 μm. Gravimetric analyses of the DLPI substrates and filters were performed to obtain aerosol mass concentrations. The concentrations of total, apparent elemental, and organic carbon (TC, EC_a, and OC) were determined using thermal and thermal-optical analysis (TOA) methods. A light transmission method (LTM) was used to determine the concentration of equivalent black carbon (BC_e), i.e., the absorbing fraction at 880 nm, for the size-resolved samples. During the dry period, owing to the pervasive presence of fires in the region upwind of the sampling site, concentrations of fine aerosols (D_p < 2.5 μm; average 59.8 μg m⁻³) were higher than those of coarse aerosols (D_p > 2.5 μm; 4.1 μg m⁻³). Carbonaceous matter, estimated as the sum of particulate organic matter (i.e., OC × 1.8) plus BC_e, comprised more than 90% of the total aerosol mass. Concentrations of EC_a (estimated by thermal analysis with a correction for charring) and BC_e (estimated by the LTM) averaged 5.2 ± 1.3 and 3.1 ± 0.8 μg m⁻³, respectively. The determination of EC was improved by extracting water-soluble organic material from the samples, which reduced the average light absorption Ångström exponent of particles in the size range of 0.1 to 1.0 μm from >2.0 to approximately 1.2. The size-resolved BC_e measured by the LTM showed a clear maximum between 0.4 and 0.6 μm in diameter. The concentrations of OC and BC_e varied diurnally during the dry period, and this variation is related to diurnal changes in boundary layer thickness and in fire frequency.
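The abstract notes that removing water-soluble organics lowered the light-absorption Ångström exponent from >2.0 to about 1.2. A standard two-wavelength estimate of this exponent is AAE = −ln(b_abs(λ1)/b_abs(λ2)) / ln(λ1/λ2). The sketch below only illustrates that calculation with invented absorption coefficients and wavelengths; the paper's underlying values and exact fitting procedure (which may use more than two wavelengths) are not reproduced.

```python
# Hedged sketch: two-wavelength absorption Angstrom exponent. The absorption
# coefficients and wavelengths below are hypothetical, chosen only to show the
# arithmetic, not data from the LBA-SMOCC campaign.
import math

def absorption_angstrom_exponent(b_abs_1: float, b_abs_2: float,
                                 wl_1: float, wl_2: float) -> float:
    """AAE = -ln(b_abs_1 / b_abs_2) / ln(wl_1 / wl_2)."""
    return -math.log(b_abs_1 / b_abs_2) / math.log(wl_1 / wl_2)

# Hypothetical absorption coefficients (Mm^-1) at 470 nm and 880 nm.
print(round(absorption_angstrom_exponent(25.0, 12.0, 470.0, 880.0), 2))  # ~1.17
```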
Abstract:
Background: Mutations in TP53 are common events during carcinogenesis. In addition to gene mutations, several reports have focused on TP53 polymorphisms as risk factors for malignant disease. Many studies have highlighted that the status of the TP53 codon 72 polymorphism could influence cancer susceptibility. However, the results have been inconsistent, and various methodological features can contribute to departures from Hardy-Weinberg equilibrium, a condition that may influence the disease risk estimates. The most widely accepted method of detecting genotyping error is to confirm genotypes by sequencing and/or by a separate method. Results: We developed two new genotyping methods for TP53 codon 72 polymorphism detection: Denaturing High Performance Liquid Chromatography (DHPLC) and dot blot hybridization. These methods were compared with Restriction Fragment Length Polymorphism (RFLP) using two different restriction enzymes. We observed high agreement among all methodologies assayed. Dot blot hybridization and DHPLC results were more highly concordant with each other than when either of these methods was compared with RFLP. Conclusions: Although variations may occur, our results indicate that DHPLC and dot blot hybridization can be used as reliable screening methods for TP53 codon 72 polymorphism detection, especially in molecular epidemiologic studies, where high throughput methodologies are required.
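The abstract reports agreement among DHPLC, dot blot hybridization, and RFLP genotype calls but does not say which agreement statistic was used. As one hedged illustration only, the sketch below computes simple percent concordance and Cohen's kappa between two hypothetical sets of codon 72 genotype calls; the data are invented and the statistic is not necessarily the one used in the paper.

```python
# Hedged illustration: agreement between two genotyping methods on the same
# samples. The genotype calls below are invented; no study data are used.
from collections import Counter

calls_dhplc = ["Arg/Arg", "Arg/Pro", "Pro/Pro", "Arg/Pro", "Arg/Arg", "Arg/Pro"]
calls_rflp  = ["Arg/Arg", "Arg/Pro", "Pro/Pro", "Arg/Arg", "Arg/Arg", "Arg/Pro"]

n = len(calls_dhplc)
observed = sum(a == b for a, b in zip(calls_dhplc, calls_rflp)) / n

# Expected agreement under independence (needed for Cohen's kappa).
p1, p2 = Counter(calls_dhplc), Counter(calls_rflp)
expected = sum((p1[g] / n) * (p2[g] / n) for g in set(calls_dhplc) | set(calls_rflp))

kappa = (observed - expected) / (1 - expected)
print(f"concordance = {observed:.2f}, kappa = {kappa:.2f}")
```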
Abstract:
It has been demonstrated that laser-induced breakdown spectrometry (LIBS) can be used as an alternative method for the determination of macronutrients (P, K, Ca, Mg) and micronutrients (B, Fe, Cu, Mn, Zn) in pellets of plant materials. However, information is required regarding sample preparation for plant analysis by LIBS. In this work, methods involving cryogenic grinding and planetary ball milling were evaluated for leaf comminution before pellet preparation. The particle sizes were associated with chemical sample properties, such as fiber and cellulose contents, as well as with pellet porosity and density. The pellets were ablated at 30 different sites by applying 25 laser pulses per site (Nd:YAG, 1064 nm, 5 ns, 10 Hz, 25 J cm⁻²). The plasma emission collected by lenses was directed through an optical fiber towards a high-resolution echelle spectrometer equipped with an ICCD. Delay time and integration time gate were fixed at 2.0 and 4.5 μs, respectively. Experiments carried out with pellets of sugarcane, orange tree and soy leaves showed a significant effect of the plant species on the choice of the most appropriate grinding conditions. Using ball milling with agate materials, 20 min of grinding for orange tree and soy leaves, and 60 min for sugarcane leaves, led to particle size distributions generally below 75 μm. Cryogenic grinding yielded similar particle size distributions after 10 min for orange tree, 20 min for soy and 30 min for sugarcane leaves. There was up to 50% emission signal enhancement in LIBS measurements for most elements when the particle size distribution, and consequently the pellet porosity, was improved. © 2011 Elsevier B.V. All rights reserved.
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines while coping with the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted; four wavelet basis functions were considered in this study. Then, we provide the average accuracy values (estimated via cross-validation) delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions to be taken, although the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile has emerged among all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). © 2011 Elsevier B.V. All rights reserved.
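As one hedged sketch of the kind of sensitivity study described above (not the authors' pipeline, data, or feature extractors), the code below cross-validates a standard RBF-kernel SVM over a grid of kernel widths on placeholder feature vectors standing in for wavelet-derived EEG features.

```python
# Hedged sketch: cross-validating an RBF-kernel SVM over several kernel widths,
# loosely mirroring the kernel-parameter sensitivity study in the abstract.
# The feature matrix is random placeholder data, not real EEG features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 21))      # 200 segments x 21 hypothetical features
y = rng.integers(0, 2, size=200)    # 0 = normal, 1 = epileptic (placeholder labels)

for gamma in [0.001, 0.01, 0.1, 1.0, 10.0]:   # kernel "radius"/width grid
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma=gamma, C=1.0))
    scores = cross_val_score(model, X, y, cv=5)
    print(f"gamma={gamma:>6}: mean CV accuracy = {scores.mean():.3f}")
```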
Abstract:
The aim of this paper is to highlight some methods of imagetic (image-based) information representation, reviewing the literature of the area and proposing a methodological model adapted to Brazilian museums. A methodology for imagetic information representation is elaborated, based on Brazilian characteristics of information treatment, in order to adapt it to museums. Finally, spreadsheets illustrating this methodology are presented.
Abstract:
ARTIOLI, G. G., B. GUALANO, E. FRANCHINI, F. B. SCAGLIUSI, M. TAKESIAN, M. FUCHS, and A. H. LANCHA. Prevalence, Magnitude, and Methods of Rapid Weight Loss among Judo Competitors. Med. Sci. Sports Exerc., Vol. 42, No. 3, pp. 436-442, 2010. Purpose: To identify the prevalence, magnitude, and methods of rapid weight loss among judo competitors. Methods: Athletes (607 males and 215 females; age = 19.3 ± 5.3 yr, weight = 70 ± 7.5 kg, height = 170.6 ± 9.8 cm) completed a previously validated questionnaire developed to evaluate rapid weight loss in judo athletes, which provides a score: the higher the score obtained, the more aggressive the weight loss behaviors. Data were analyzed using descriptive statistics and frequency analyses. Mean questionnaire scores were used to compare specific groups of athletes using, when appropriate, the Mann-Whitney U-test or a general linear model one-way ANOVA followed by the Tamhane post hoc test. Results: Eighty-six percent of athletes reported having already lost weight to compete. When heavyweights are excluded, this percentage rises to 89%. Most athletes reported reductions of up to 5% of body weight (mean ± SD: 2.5 ± 2.3%). The most weight ever lost was 2%-5%, although a considerable proportion of athletes reported reductions of 5%-10% (mean ± SD: 6 ± 4%). The number of reductions undergone in a season was 3 ± 5, and the reductions usually occurred within 7 ± 7 d. Athletes began cutting weight at 12.6 ± 6.1 yr. No significant differences were found in the scores obtained by male versus female athletes or by athletes from different weight classes. Elite athletes scored significantly higher in the questionnaire than nonelite athletes, and athletes who began cutting weight earlier also scored higher than those who began later. Conclusions: Rapid weight loss is highly prevalent among judo competitors. The level of aggressiveness in weight management behaviors seems not to be influenced by gender or weight class, but it does seem to be influenced by competitive level and by the age at which athletes began cutting weight.
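The abstract describes comparing questionnaire scores between groups with the Mann-Whitney U test (and one-way ANOVA where appropriate). As a hedged illustration only, with invented scores rather than the study's data, the sketch below runs such a comparison with SciPy.

```python
# Hedged sketch: comparing rapid-weight-loss questionnaire scores between two
# groups with the Mann-Whitney U test. The scores are invented, not study data.
from scipy.stats import mannwhitneyu

scores_elite    = [42, 38, 45, 51, 40, 47, 39, 44]
scores_nonelite = [31, 35, 28, 33, 30, 36, 29, 34]

stat, p_value = mannwhitneyu(scores_elite, scores_nonelite, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
```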
Abstract:
The aim of the present study was to compare and correlate the training impulse (TRIMP) estimates proposed by Banister (TRIMP(Banister)), Stagno (TRIMP(Stagno)) and Manzi (TRIMP(Manzi)). The subjects underwent an incremental test on a cycle ergometer with heart rate and blood lactate concentration measurements. On a second occasion, they performed 30 min of exercise at the intensity corresponding to the maximal lactate steady state, and TRIMP(Banister), TRIMP(Stagno) and TRIMP(Manzi) were calculated. The mean values of TRIMP(Banister) (56.5 ± 8.2 a.u.) and TRIMP(Stagno) (51.2 ± 12.4 a.u.) were not different (P > 0.05) and were highly correlated (r = 0.90). In addition, they presented a good level of agreement, i.e., low bias and relatively narrow limits of agreement. On the other hand, despite being highly correlated (r = 0.93), TRIMP(Stagno) and TRIMP(Manzi) (73.4 ± 17.6 a.u.) were different (P < 0.05), with a low level of agreement. The TRIMP(Banister) and TRIMP(Manzi) estimates were not different (P = 0.06) and were highly correlated (r = 0.82), but showed a low level of agreement. Thus, we conclude that the investigated TRIMP methods are not equivalent. In practical terms, it seems prudent to monitor the training process using only one of the estimates.
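The abstract compares three TRIMP formulations but does not reproduce the formulas. As a hedged example, the sketch below computes the classic Banister TRIMP for a steady 30-min bout using the commonly cited weighting factor 0.64·e^(1.92·ΔHR ratio) for men; the heart-rate values are invented, and the Stagno and Manzi variants (which use different weighting schemes) are not shown.

```python
# Hedged sketch of the classic Banister TRIMP for a steady-state bout.
# Heart-rate values below are hypothetical, not data from the study.
import math

def banister_trimp(duration_min: float, hr_exercise: float,
                   hr_rest: float, hr_max: float, male: bool = True) -> float:
    """Banister TRIMP = duration * dHR_ratio * a * exp(b * dHR_ratio),
    with (a, b) = (0.64, 1.92) for men and (0.86, 1.67) for women,
    where dHR_ratio = (HR_ex - HR_rest) / (HR_max - HR_rest)."""
    dhr = (hr_exercise - hr_rest) / (hr_max - hr_rest)
    a, b = (0.64, 1.92) if male else (0.86, 1.67)
    return duration_min * dhr * a * math.exp(b * dhr)

# Hypothetical 30-min bout at an intensity near the maximal lactate steady state.
print(round(banister_trimp(30, hr_exercise=165, hr_rest=60, hr_max=190), 1))
```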
Abstract:
Molybdenum and tungsten bimetallic oxides were synthesized by the following methods: Pechini, coprecipitation, and solid-state reaction (SSR). After characterization, these solids were carburized by temperature-programmed carburization. The carburization process was monitored by tracking the consumption of the carburizing hydrocarbon and the CO produced. This monitoring makes it possible to avoid or diminish the formation of pyrolytic carbon.
Abstract:
Understanding the product's 'end-of-life' is important to reduce the environmental impact of products' final disposal. When the initial stages of product development consider end-of-life aspects, which can be established through ecodesign (a proactive approach to environmental management that aims to reduce the total environmental impact of products), it becomes easier to close the loop of materials. 'End-of-life' ecodesign methods generally include more than one 'end-of-life' strategy. Since product complexity varies substantially, some components, systems or sub-systems are easier to recycle, reuse or remanufacture than others. Remanufacturing is an effective way to keep products in a closed loop, reducing both the environmental impacts and the costs of manufacturing processes. This paper presents some ecodesign methods focused on the integration of different 'end-of-life' strategies, with special attention to remanufacturing, given its increasing importance in the international scenario for reducing the life cycle impacts of products. © 2009 Elsevier Ltd. All rights reserved.
Abstract:
We assess the performance of three unconditionally stable finite-difference time-domain (FDTD) methods for the modeling of doubly dispersive metamaterials: 1) locally one-dimensional FDTD; 2) locally one-dimensional FDTD with Strang splitting; and 3) alternating direction implicit FDTD. We use both double-negative media and zero-index media as benchmarks.
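For orientation only, the sketch below is a minimal explicit 1D Yee FDTD update for a vacuum domain with a pulsed soft source; it is not one of the unconditionally stable LOD/ADI schemes, and it does not include the dispersive metamaterial models evaluated in the paper. The grid size, Courant number, and source parameters are arbitrary choices.

```python
# Minimal explicit 1D Yee FDTD sketch (vacuum, normalized units), included only
# to fix notation; not the unconditionally stable LOD/ADI schemes or the
# dispersive media treated in the abstract. Boundaries are left reflective.
import math

nz, n_steps = 200, 300
courant = 0.5                      # S = c*dt/dz; explicit scheme needs S <= 1
ez = [0.0] * nz                    # electric field samples
hy = [0.0] * nz                    # magnetic field samples (staggered half-cell)

for n in range(n_steps):
    # Update H from the spatial difference of E.
    for k in range(nz - 1):
        hy[k] += courant * (ez[k + 1] - ez[k])
    # Update E from the spatial difference of H.
    for k in range(1, nz):
        ez[k] += courant * (hy[k] - hy[k - 1])
    # Soft Gaussian-pulse source at the center of the grid.
    ez[nz // 2] += math.exp(-((n - 40) / 12.0) ** 2)

print(max(abs(v) for v in ez))     # peak field magnitude after propagation
```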
Abstract:
The airflow velocities and pressures are calculated from a three-dimensional model of the human larynx by using the finite element method. The laryngeal airflow is assumed to be incompressible, isothermal, steady, and created by fixed pressure drops. The influence of different laryngeal profiles (convergent, parallel, and divergent), glottal area, and dimensions of the false vocal folds on the airflow is investigated. The results indicate that vertical and horizontal phase differences in the laryngeal tissue movements are influenced by the nonlinear pressure distribution across the glottal channel, and that the glottal entrance shape influences the air pressure distribution inside the glottis. Additionally, the false vocal folds increase the glottal duct pressure drop by creating a new constricted channel in the larynx, and alter the airflow vortices formed downstream of the true vocal folds. © 2007 Elsevier Ltd. All rights reserved.
Abstract:
On-line leak detection is a main concern for the safe operation of pipelines. Acoustic and mass-balance methods are the most important and most extensively applied technologies in field problems. The objective of this work is to compare these leak detection methods with respect to a given reference situation, i.e., the same pipeline and monitoring signals acquired at the inlet and outlet ends. Experimental tests were conducted in a 749 m long laboratory pipeline transporting water as the working fluid. The instrumentation included pressure transducers and electromagnetic flowmeters. Leaks were simulated by opening solenoid valves placed at known positions and previously calibrated to produce known average leak flow rates. The results clearly show the limitations and advantages of each method. It is also quite clear that the acoustic and mass-balance technologies are, in fact, complementary. In general, an acoustic leak detection system sends out an alarm more rapidly and locates the leak more precisely, provided that the rupture of the pipeline occurs abruptly enough. On the other hand, a mass-balance leak detection method is capable of quantifying the leak flow rate very accurately and of detecting progressive leaks.
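As a hedged sketch of the mass-balance principle described above (not the authors' implementation), the code below averages the inlet-outlet flow-rate difference over a sliding window and raises an alarm when the mean imbalance exceeds a threshold, which also serves as a rough estimate of the leak flow rate. The window length, threshold, and flow signals are illustrative assumptions.

```python
# Hedged sketch of a mass-balance leak detector: average the inlet-outlet
# flow-rate difference over a window and alarm above a threshold. The signals
# and the threshold below are invented for illustration.
from collections import deque

def mass_balance_monitor(inlet_flow, outlet_flow, window=20, threshold=0.5):
    """Yield (sample index, estimated leak rate) whenever the windowed mean
    imbalance exceeds the threshold (in the same flow units as the inputs)."""
    buffer = deque(maxlen=window)
    for i, (q_in, q_out) in enumerate(zip(inlet_flow, outlet_flow)):
        buffer.append(q_in - q_out)
        if len(buffer) == window:
            imbalance = sum(buffer) / window
            if imbalance > threshold:
                yield i, imbalance

# Synthetic example: a 1.0-unit leak starts at sample 50.
inlet = [10.0] * 100
outlet = [10.0] * 50 + [9.0] * 50
for sample, leak in mass_balance_monitor(inlet, outlet):
    print(f"alarm at sample {sample}: estimated leak ~ {leak:.2f}")
    break
```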