975 results for Log-log Method
Abstract:
This paper proposes MSISpIC, a probabilistic sonar scan matching algorithm for the localization of an autonomous underwater vehicle (AUV). The technique uses range scans gathered with a Mechanical Scanning Imaging Sonar (MSIS) and the robot displacement estimated through dead-reckoning using a Doppler velocity log (DVL) and a motion reference unit (MRU). The proposed method is an extension of the pIC algorithm. An extended Kalman filter (EKF) is used to estimate the robot path during the scan in order to reference all the range and bearing measurements, as well as their uncertainty, to a scan-fixed frame before registering. The major contribution consists of experimentally proving that probabilistic sonar scan matching techniques have the potential to improve DVL-based navigation. The algorithm has been tested on an AUV guided along a 600 m path within an abandoned marina underwater environment with satisfactory results.
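As an illustration of the EKF-based dead-reckoning step the abstract describes, here is a minimal sketch; the planar state layout, linear motion model, and noise magnitudes are assumptions for demonstration, not the authors' implementation.

```python
# Minimal sketch of an EKF prediction step for DVL/MRU dead-reckoning.
# State is a planar [x, y] position; all noise values are illustrative.
import numpy as np

def ekf_predict(x, P, v_dvl, yaw_mru, dt, Q):
    """Propagate position with body-frame DVL velocity and MRU yaw."""
    c, s = np.cos(yaw_mru), np.sin(yaw_mru)
    R = np.array([[c, -s], [s, c]])      # body-to-scan-frame rotation
    x_new = x + R @ v_dvl * dt           # dead-reckoned displacement
    F = np.eye(2)                        # Jacobian of this (linear) model
    P_new = F @ P @ F.T + Q * dt         # uncertainty grows along the scan
    return x_new, P_new

x, P = np.zeros(2), np.eye(2) * 1e-4
Q = np.eye(2) * 1e-3                     # assumed process noise
x, P = ekf_predict(x, P, v_dvl=np.array([0.5, 0.0]), yaw_mru=0.1, dt=0.1, Q=Q)
print(x, np.diag(P))
```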
Abstract:
The author studies the error and complexity of the discrete random walk Monte Carlo technique for radiosity, using both the shooting and gathering methods. The author shows that the shooting method exhibits a lower complexity than the gathering one and, under some constraints, has a linear complexity. This is an improvement over a previous result that pointed to an O(n log n) complexity. The author gives and compares three unbiased estimators for each method and obtains closed forms and bounds for their variances. The author also bounds the expected value of the mean square error (MSE). Some of the results obtained are also shown.
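For intuition, here is a toy shooting random-walk estimator on an invented three-patch scene; the form factors, reflectances, and the hard cutoff (used in place of Russian roulette, at the cost of a small truncation bias) are all assumptions for demonstration, not the paper's estimators.

```python
# Toy shooting random walk for radiosity on a 3-patch scene with equal
# areas and a symmetric, row-stochastic form-factor matrix (all invented).
import numpy as np

rng = np.random.default_rng(0)
F = np.array([[0.0, 0.6, 0.4],
              [0.6, 0.0, 0.4],
              [0.4, 0.4, 0.2]])          # symmetric form factors
rho = np.array([0.7, 0.5, 0.3])          # patch reflectances
E = np.array([1.0, 0.0, 0.0])            # patch emissions
A = np.ones(3)                           # equal patch areas

def shoot(n_walks=20_000):
    B = E.copy()                         # radiosity estimate, starts at emission
    for _ in range(n_walks):
        i, w = 0, E[0]                   # each walk carries the source emission
        while w > 1e-3:                  # hard cutoff instead of Russian roulette
            i = rng.choice(3, p=F[i])    # next patch sampled from form factors
            w *= rho[i]                  # attenuate by the reflectance
            B[i] += w / (A[i] * n_walks) # deposit this collision's contribution
    return B

print(shoot())
# Compare with the matrix solution: np.linalg.solve(np.eye(3) - rho[:, None] * F, E)
```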
Abstract:
Background. To determine the presence and evolution of indicator microorganisms of water pollution in the “Conde del Guadalhorce” reservoir, Málaga, Spain. A second objective was to analyze the degree of pollution and to evaluate the sanitary quality of the bathing water and its compliance with European Directive 76/160/CE. Method. A total of 120 water samples were collected at two freshwater bathing sites during May-September sampling periods between 2000 and 2005, and the numbers of total coliforms (CT), faecal coliforms (CF), and faecal streptococci (EF) were enumerated using the membrane filtration method. We used the log-normal distribution method and calculated logarithmic means, percentile points, and CF:EF ratios, together with ANOVA and Pearson correlations. Results. Only two samples exceeded the CF limit values, at the Camping sampling station in 2000. CF:EF ratios were higher (>4) from 2000 to 2002 and lower (<0.7) from 2003 to 2005. A significant difference (ANOVA, F = 3.41, α < 0.01) was observed only for EF over the evaluated period. There was no significant difference between concentration means at the bathing water sites (ANOVA, F = 3.395, α < 0.01). The counts of CT and CF were significantly correlated in Kiosko water samples, while in Camping water a significant correlation (t = 0.632, p < 0.05) was observed only for EF in 2000, 2003, and 2005. Conclusions. The “Conde del Guadalhorce” reservoir showed hygienic conditions safe for bathing. Overall, the bathing water quality was good. The CT, CF, and EF indicators complied with the EU Directive during 2000-2005, with the exception of CF at the Camping station in 2000. CT and CF concentrations at Camping were frequently higher than at Kiosko, which could be caused by the abundance of swimmers and recreational activities. There was a rising trend in EF, which could be caused by a faecal pollution source of animal origin and requires further research.
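A hedged example of the log-normal summary statistics mentioned above (logarithmic means and percentile points), computed on invented coliform counts rather than the study's data:

```python
# Geometric (logarithmic) mean and a fitted log-normal percentile point
# from hypothetical coliform counts (CFU/100 ml); data are invented.
import numpy as np
from scipy.stats import norm

counts = np.array([120, 340, 80, 560, 210, 95, 430, 150])
logs = np.log10(counts)
log_mean, log_sd = logs.mean(), logs.std(ddof=1)

geometric_mean = 10 ** log_mean
p95 = 10 ** (log_mean + norm.ppf(0.95) * log_sd)  # 95th percentile point

print(f"geometric mean = {geometric_mean:.0f} CFU/100 ml, 95th pct = {p95:.0f}")
```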
Abstract:
BACKGROUND AND PURPOSE: The ASTRAL score was externally validated, showing remarkable consistency in 3-month outcome prognosis in patients with acute ischemic stroke. The present study aimed to evaluate the ASTRAL score's prognostic accuracy in predicting 5-year outcome. METHODS: All consecutive patients with acute ischemic stroke registered in the Athens Stroke Registry between January 1, 1998, and December 31, 2010, were included. Patients were excluded if admitted >24 hours after symptom onset or if any ASTRAL score component was missing. End points were 5-year unfavorable functional outcome, defined as modified Rankin Scale 3 to 6, and 5-year mortality. For each outcome, the area under the receiver operating characteristics curve was calculated; also, a multivariate Cox proportional hazards analysis was performed to investigate whether the ASTRAL score was an independent predictor of outcome. The Kaplan-Meier product limit method was used to estimate the probability of 5-year survival for each ASTRAL score quartile. RESULTS: The area under the receiver operating characteristics curve of the score to predict 5-year unfavorable functional outcome was 0.89 (95% confidence interval, 0.88-0.91). In multivariate Cox proportional hazards analysis, the ASTRAL score was independently associated with 5-year unfavorable functional outcome (hazard ratio, 1.09; 95% confidence interval, 1.08-1.10). The area under the receiver operating characteristics curve for the ASTRAL score's discriminatory power to predict 5-year mortality was 0.81 (95% confidence interval, 0.78-0.83). In multivariate analysis, the ASTRAL score was independently associated with 5-year mortality (hazard ratio, 1.09; 95% confidence interval, 1.08-1.10). During the 5-year follow-up, the probability of survival was significantly lower with increasing ASTRAL score quartiles (log-rank test, P<0.001). CONCLUSIONS: The ASTRAL score reliably predicts 5-year functional outcome and mortality in patients with acute ischemic stroke.
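A sketch of the two accuracy measures reported above, discrimination via the ROC AUC and an (unadjusted) Cox proportional hazards model, on invented placeholder data rather than the Athens Stroke Registry:

```python
# ROC AUC and a one-covariate Cox model on simulated data; the column
# names, score range, and outcome relationship are all placeholders.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "astral": rng.integers(5, 45, n),              # hypothetical scores
    "time": rng.exponential(36, n).clip(0.1, 60),  # months of follow-up
})
# Simulate worse outcomes for higher scores (toy relationship only).
df["dead_5y"] = (rng.random(n) < 1 / (1 + np.exp(-(df.astral - 25) / 5))).astype(int)

print("AUC:", roc_auc_score(df.dead_5y, df.astral))
cph = CoxPHFitter().fit(df, duration_col="time", event_col="dead_5y")
cph.print_summary(decimals=2)                      # hazard ratio per score point
```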
Abstract:
STATEMENT OF PROBLEM: Wear of methacrylate artificial teeth resulting in vertical loss is a problem for both dentists and patients. PURPOSE: The purpose of this study was to quantify wear of artificial teeth in vivo and to relate it to subject and tooth variables. MATERIAL AND METHODS: Twenty-eight subjects treated with complete dentures received 2 artificial tooth materials (polymethyl methacrylate (PMMA)/double-cross-linked PMMA fillers: 35%/59% (SR Antaris DCL, SR Postaris DCL); experimental: 48%/46%). At baseline and after 12 months, impressions of the dentures were poured with improved stone. After laser scanning, the casts were superimposed and matched. Maximal vertical loss (mm) and volumetric loss (mm³) were calculated for each tooth and log-transformed to reduce variability. Volumetric loss was related to the occlusally active surface area. Linear mixed models were used to study the influence of the factors jaw, tooth, and material on adjusted (residual) wear values (α=.05). RESULTS: Due to dropouts (n=5) and unmatchable casts (n=3), 69% of all teeth were analyzed. Volumetric loss had a strong linear relationship to surface area (P<.001); this was less pronounced for vertical loss (P=.004). The factor showing the highest influence was the subject. Wear was tooth dependent (increasing from incisors to molars); however, these differences diminished once the wear rates were adjusted for occlusal area, and only a few remained significant (anterior versus posterior maxillary teeth). Another influencing factor was the age of the subject. CONCLUSIONS: Clinical wear of artificial teeth is higher than previously measured or expected. The presented method of analyzing wear of artificial teeth using a laser-scanning device seemed suitable.
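A sketch of the linear mixed model named above, with log wear as response, fixed effects for jaw and tooth, and subject as the random grouping factor; the data frame and all values are invented placeholders, not the study's measurements.

```python
# Linear mixed model on simulated denture-wear data (invented values).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 200
df = pd.DataFrame({
    "subject": rng.integers(1, 29, n).astype(str),
    "jaw": rng.choice(["upper", "lower"], n),
    "tooth": rng.choice(["incisor", "premolar", "molar"], n),
    "wear_mm3": rng.lognormal(mean=-1.0, sigma=0.6, size=n),
})
df["log_wear"] = np.log(df.wear_mm3)     # log-transform to reduce variability

# Subject enters as the random (grouping) factor, as in the abstract.
m = smf.mixedlm("log_wear ~ jaw + tooth", df, groups=df["subject"]).fit()
print(m.summary())
```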
Abstract:
Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. Elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot one set of points, usually the rows of the data matrix, optimally represent the positions of the cases or sample units, which are weighted and usually standardized in some way unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented in accordance with their contributions to the low-dimensional solution. As for any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data. The method is illustrated with several examples to demonstrate how the standard biplot copes in different situations to give a joint map which needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important.
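The specific "standard biplot" scaling is the paper's proposal and is not reproduced here; the following sketch shows only the generic SVD step that any such scaling starts from, on an invented data matrix, with the scalar-product property made explicit.

```python
# Generic biplot coordinates from the SVD of a centred data matrix.
import numpy as np

X = np.array([[2.0, 4.5, 1.0],
              [3.5, 1.0, 2.2],
              [0.5, 3.0, 4.1],
              [2.8, 2.2, 0.9]])          # invented data matrix
Xc = X - X.mean(axis=0)                  # column-centre

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                    # two-dimensional display
row_coords = U[:, :k] * s[:k]            # principal coordinates for rows
col_coords = Vt[:k].T                    # standard coordinates for columns

# Scalar-product property: row_coords @ col_coords.T approximates Xc.
print(np.abs(row_coords @ col_coords.T - Xc).max())  # rank-2 approximation error
```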
Abstract:
Power transformations of positive data tables, prior to applying the correspondence analysis algorithm, are shown to open up a family of methods with direct connections to the analysis of log-ratios. Two variations of this idea are illustrated. The first approach is simply to power the original data and perform a correspondence analysis; this method is shown to converge to unweighted log-ratio analysis as the power parameter tends to zero. The second approach is to apply the power transformation to the contingency ratios, that is, the values in the table relative to expected values based on the marginals; this method converges to weighted log-ratio analysis, or the spectral map. Two applications are described: first, a matrix of population genetic data which is inherently two-dimensional, and second, a larger cross-tabulation with higher dimensionality, from a linguistic analysis of several books.
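A hedged numerical sketch of the second variant, powering the contingency ratios; the table is invented, and the Box-Cox form of the power transformation is used so that the convergence to log-ratios as the parameter tends to zero is visible directly.

```python
# Power-transformed contingency ratios converging to log-ratios.
import numpy as np

N = np.array([[20, 35, 10],
              [15, 25, 30],
              [40, 10, 15]], dtype=float)   # invented contingency table
P = N / N.sum()
r, c = P.sum(axis=1), P.sum(axis=0)
ratios = P / np.outer(r, c)                 # contingency ratios p_ij / (r_i c_j)

def box_cox(x, alpha):
    """(x^alpha - 1) / alpha, which tends to log(x) as alpha -> 0."""
    return np.log(x) if alpha == 0 else (x**alpha - 1) / alpha

for alpha in (1.0, 0.5, 0.1, 0.0):          # first row shown for each power
    print(alpha, np.round(box_cox(ratios, alpha)[0], 3))
```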
Abstract:
In order to interpret the biplot, it is necessary to know which points (usually variables) are the ones that are important contributors to the solution, and this information is available separately as part of the biplot's numerical results. We propose a new scaling of the display, called the contribution biplot, which incorporates this diagnostic directly into the graphical display, showing visually the important contributors and thus facilitating the biplot interpretation and often simplifying the graphical representation considerably. The contribution biplot can be applied to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. In the contribution biplot one set of points, usually the rows of the data matrix, optimally represent the spatial positions of the cases or sample units, according to some distance measure that usually incorporates some form of standardization unless all data are comparable in scale. The other set of points, usually the columns, is represented by vectors that are related to their contributions to the low-dimensional solution. A fringe benefit is that usually only one common scale for row and column points is needed on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot legible. Furthermore, this version of the biplot also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important, when they are in fact contributing minimally to the solution.
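A sketch of the contribution diagnostic that the contribution biplot makes visible, in the simplest unweighted (PCA-style) case; in correspondence analysis the same quantities would carry mass weights. Data are invented.

```python
# Contributions of variables to principal axes: squared entries of V
# from the SVD, which sum to 1 down each axis in the unweighted case.
import numpy as np

X = np.random.default_rng(2).normal(size=(30, 5))  # invented data
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

contrib = Vt.T ** 2                      # variable-by-axis contributions
print(np.round(contrib[:, :2], 3))       # contributions to the first two axes
print(contrib[:, 0].sum())               # each axis's contributions sum to 1
```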
Abstract:
We consider two fundamental properties in the analysis of two-way tables of positive data: the principle of distributional equivalence, one of the cornerstones of correspondence analysis of contingency tables, and the principle of subcompositional coherence, which forms the basis of compositional data analysis. For an analysis to be subcompositionally coherent, it suffices to analyse the ratios of the data values. The usual approach to dimension reduction in compositional data analysis is to perform principal component analysis on the logarithms of ratios, but this method does not obey the principle of distributional equivalence. We show that by introducing weights for the rows and columns, the method achieves this desirable property. This weighted log-ratio analysis is theoretically equivalent to spectral mapping, a multivariate method developed almost 30 years ago for displaying ratio-scale data from biological activity spectra. The close relationship between spectral mapping and correspondence analysis is also explained, as well as their connection with association modelling. The weighted log-ratio methodology is applied here to frequency data in linguistics and to chemical compositional data in archaeology.
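A minimal sketch of the weighted log-ratio analysis (spectral mapping) computation outlined above: log-transform, double-centre with row and column masses as weights, then decompose; the frequency table is invented for illustration.

```python
# Weighted log-ratio analysis: weighted double-centring of the log table
# followed by an SVD of the mass-weighted matrix.
import numpy as np

N = np.array([[90, 30, 60],
              [20, 80, 40],
              [50, 50, 70]], dtype=float)  # invented frequency table
P = N / N.sum()
r, c = P.sum(axis=1), P.sum(axis=0)        # row and column masses (weights)

L = np.log(P)
L = L - (L * c).sum(axis=1, keepdims=True)           # centre rows, column-weighted
L = L - (r[:, None] * L).sum(axis=0, keepdims=True)  # centre columns, row-weighted

S = np.sqrt(r)[:, None] * L * np.sqrt(c)   # weight, then decompose
U, s, Vt = np.linalg.svd(S, full_matrices=False)
print("weighted log-ratio singular values:", np.round(s, 4))
```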
Abstract:
The n-octanol/water partition coefficient (log Po/w) is a key physicochemical parameter for drug discovery, design, and development. Here, we present a physics-based approach that shows a strong linear correlation between the computed solvation free energy in implicit solvents and the experimental log Po/w on a cleansed data set of more than 17,500 molecules. After internal validation by five-fold cross-validation and data randomization, the predictive power of the most interesting multiple linear model, based solely on two GB/SA parameters, was tested on two different external sets of molecules. On the Martel druglike test set, the predictive power of the best model (N = 706, r = 0.64, MAE = 1.18, and RMSE = 1.40) is similar to that of six well-established empirical methods. On the 17-drug test set, our model outperformed all compared empirical methodologies (N = 17, r = 0.94, MAE = 0.38, and RMSE = 0.52). The physical basis of our original GB/SA approach, together with its predictive capacity, computational efficiency (1 to 2 s per molecule), and three-dimensional molecular graphics capability, lays the foundations for a promising predictor, the implicit log P method (iLOGP), to complement the portfolio of drug design tools developed and provided by the SIB Swiss Institute of Bioinformatics.
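A hedged sketch of the model form described above: a multiple linear regression of experimental log P on two solvation-derived descriptors, with five-fold cross-validation. The descriptor values and fitted coefficients here are random placeholders, not the iLOGP parameterization.

```python
# Two-descriptor multiple linear regression with 5-fold CV on simulated data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 500
X = rng.normal(size=(n, 2))              # stand-ins for the two GB/SA terms
logp = 1.2 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n)

model = LinearRegression().fit(X, logp)
r2_cv = cross_val_score(model, X, logp, cv=5)   # five-fold cross-validation
print("coefficients:", model.coef_, "mean CV R^2:", r2_cv.mean())
```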
Abstract:
This paper compares two well-known scan matching algorithms: the MbICP and the pIC. As a result of the study, the MSISpIC, a probabilistic scan matching algorithm for the localization of an Autonomous Underwater Vehicle (AUV), is proposed. The technique uses range scans gathered with a Mechanical Scanning Imaging Sonar (MSIS) and the robot displacement estimated through dead-reckoning with the help of a Doppler Velocity Log (DVL) and a Motion Reference Unit (MRU). The proposed method is an extension of the pIC algorithm. Its major contribution consists in: 1) using an EKF to estimate the local path traveled by the robot while grabbing the scan, as well as its uncertainty, and 2) proposing a method to group all the data grabbed along the robot's path into a unique scan with a convenient uncertainty model. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment with satisfactory results.
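In the spirit of the scan-grouping step described above, here is a hedged sketch of referencing a single sonar beam to a scan-fixed frame with first-order uncertainty propagation; the frames, Jacobians shown, and noise values are illustrative assumptions, not the paper's formulation.

```python
# Compose a robot pose with a polar sonar beam and propagate both
# covariances to the resulting point (first-order / EKF-style).
import numpy as np

def beam_to_scan_frame(pose, beam, P_pose, P_beam):
    """pose = [x, y, theta] in the scan frame; beam = [range, bearing]."""
    x, y, th = pose
    r, b = beam
    px = x + r * np.cos(th + b)
    py = y + r * np.sin(th + b)
    # Jacobians of the point w.r.t. the pose and the measurement.
    Jp = np.array([[1, 0, -r * np.sin(th + b)],
                   [0, 1,  r * np.cos(th + b)]])
    Jm = np.array([[np.cos(th + b), -r * np.sin(th + b)],
                   [np.sin(th + b),  r * np.cos(th + b)]])
    P = Jp @ P_pose @ Jp.T + Jm @ P_beam @ Jm.T
    return np.array([px, py]), P

pt, P = beam_to_scan_frame([1.0, 2.0, 0.3], [5.0, -0.1],
                           np.diag([0.01, 0.01, 0.001]), np.diag([0.05, 0.002]))
print(pt, np.diag(P))
```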
Abstract:
OBJECTIVES: To determine the epidemiology of biliary atresia (BA) in Switzerland, the outcome of the children from diagnosis, and the prognostic factors. PATIENTS AND METHODS: The records of all patients with BA born in Switzerland between January 1994 and December 2004 were analyzed. Survival rates were calculated with the Kaplan-Meier method, and prognostic factors were evaluated with the log-rank test. Median follow-up was 58 months (range, 5-124). RESULTS: BA was diagnosed in 48 children. Incidence was 1 in 17,800 live births (95% confidence interval 1/13,900-1/24,800), without significant regional, annual, or seasonal variation. Forty-three children underwent a Kasai portoenterostomy (PE) in 5 different Swiss pediatric surgery units. Median age at Kasai PE was 68 days (range, 30-126). Four-year survival with native liver after Kasai PE was 37.4%. Liver transplantation (LT) was needed in 31 of 48 children with BA, including 5 patients without a previous Kasai PE. Four patients (8%, all born before 2001) died while waiting for LT, and 29 LT were performed in 27 patients (28 in Geneva and 1 in Paris). All of the transplanted patients are alive. Four-year overall BA patient survival was 91.7%. Four-year survival with native liver was 75% in patients who underwent Kasai PE before 46 days, 33% in patients operated on between 46 and 75 days, and 11% in patients operated on after 75 days (P = 0.02). CONCLUSIONS: Overall survival of patients with BA in Switzerland compares favorably with current international standards, whereas results of the Kasai operation could be improved to reduce the need for LTs in infancy and early childhood.
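A sketch of the survival analysis named above, Kaplan-Meier estimation per group and a log-rank comparison, on invented age-at-Kasai groups rather than the Swiss registry data:

```python
# Kaplan-Meier curves and a log-rank test on simulated two-group data.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)
early = pd.DataFrame({"months": rng.exponential(80, 40).clip(1, 48),
                      "event": rng.random(40) < 0.3})   # fewer failures
late = pd.DataFrame({"months": rng.exponential(30, 40).clip(1, 48),
                     "event": rng.random(40) < 0.7})    # more failures

kmf = KaplanMeierFitter()
kmf.fit(early.months, early.event, label="PE before 46 days")
print(kmf.survival_function_.tail(1))    # estimated survival at end of follow-up

res = logrank_test(early.months, late.months, early.event, late.event)
print("log-rank p =", res.p_value)
```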
Abstract:
BACKGROUND: The efficacy of cardiac pacing for prevention of syncopal recurrences in patients with neurally mediated syncope is controversial. We wanted to determine whether pacing therapy reduces syncopal recurrences in patients with severe asystolic neurally mediated syncope. METHODS AND RESULTS: Double-blind, randomized placebo-controlled study conducted in 29 centers in the Third International Study on Syncope of Uncertain Etiology (ISSUE-3) trial. Patients were ≥40 years old and had experienced ≥3 syncopal episodes in the previous 2 years. Initially, 511 patients received an implantable loop recorder; 89 of these had documentation of syncope with ≥3 s asystole or ≥6 s asystole without syncope within 12 ± 10 months and met criteria for pacemaker implantation; 77 of 89 patients were randomly assigned to dual-chamber pacing with rate drop response or to sensing only. The data were analyzed on an intention-to-treat basis. There was syncope recurrence during follow-up in 27 patients, 19 of whom had been assigned to pacemaker OFF and 8 to pacemaker ON. The 2-year estimated syncope recurrence rate was 57% (95% CI, 40-74) with pacemaker OFF and 25% (95% CI, 13-45) with pacemaker ON (log-rank P=0.039 at the threshold of statistical significance of 0.04). The risk of recurrence was reduced by 57% (95% CI, 4-81). Five patients had procedural complications: lead dislodgment requiring correction in 4 and subclavian vein thrombosis in 1. CONCLUSIONS: Dual-chamber permanent pacing is effective in reducing recurrence of syncope in patients ≥40 years old with severe asystolic neurally mediated syncope. The observed 32% absolute and 57% relative reduction in syncope recurrence supports this invasive treatment for the relatively benign neurally mediated syncope. CLINICAL TRIAL REGISTRATION: URL: http://www.clinicaltrials.gov. Unique identifier: NCT00359203.
Abstract:
Ga(3+) is a semimetal element that competes for the iron-binding sites of transporters and enzymes. We investigated the activity of gallium maltolate (GaM), an organic gallium salt with high solubility, against laboratory and clinical strains of methicillin-susceptible Staphylococcus aureus (MSSA), methicillin-resistant S. aureus (MRSA), methicillin-susceptible Staphylococcus epidermidis (MSSE), and methicillin-resistant S. epidermidis (MRSE) in logarithmic or stationary phase and in biofilms. The MICs of GaM were higher for S. aureus (375 to 2,000 microg/ml) than for S. epidermidis (94 to 200 microg/ml). Minimal biofilm inhibitory concentrations were 3,000 to ≥6,000 microg/ml (S. aureus) and 94 to 3,000 microg/ml (S. epidermidis). In time-kill studies, GaM exhibited slow, dose-dependent killing, with maximal action at 24 h of 1.9 log10 CFU/ml (MSSA) and 3.3 log10 CFU/ml (MRSA) against S. aureus at 3x MIC, and of 2.9 log10 CFU/ml (MSSE) and 4.0 log10 CFU/ml (MRSE) against S. epidermidis at 10x MIC. In calorimetric studies, growth-related heat production was inhibited by GaM at subinhibitory concentrations, and the minimal heat inhibition concentrations were 188 to 4,500 microg/ml (MSSA), 94 to 1,500 microg/ml (MRSA), and 94 to 375 microg/ml (MSSE and MRSE), which correlated well with the MICs. Thus, calorimetry was a fast, accurate, and simple method useful for investigation of antimicrobial activity at subinhibitory concentrations. In conclusion, GaM exhibited activity against staphylococci in different growth phases, including in stationary phase and biofilms, but high concentrations were required. These data support the potential topical use of GaM, including its use for the treatment of wound infections, MRSA decolonization, and coating of implants.
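A worked example of the log10 CFU/ml reductions quoted above, converting each log kill into the surviving fraction of the inoculum:

```python
# Log10 reductions from the abstract converted to percent of CFU killed.
for name, log_kill in [("MSSA @3xMIC", 1.9), ("MRSA @3xMIC", 3.3),
                       ("MSSE @10xMIC", 2.9), ("MRSE @10xMIC", 4.0)]:
    surviving = 10 ** (-log_kill)        # fraction of CFU remaining
    print(f"{name}: {log_kill} log10 kill -> {100 * (1 - surviving):.3f}% killed")
```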
Abstract:
Geophysical techniques can help to bridge the inherent gap, with regard to spatial resolution and range of coverage, that plagues classical hydrological methods. This has led to the emergence of the new and rapidly growing field of hydrogeophysics. Given the differing sensitivities of various geophysical techniques to hydrologically relevant parameters, and their inherent trade-off between resolution and range, the fundamental usefulness of multi-method hydrogeophysical surveys for reducing uncertainties in data analysis and interpretation is widely accepted. A major challenge arising from such endeavors is the quantitative integration of the resulting vast and diverse database in order to obtain a unified model of the probed subsurface region that is internally consistent with all available data. To address this problem, we have developed a strategy for hydrogeophysical data integration based on Monte-Carlo-type conditional stochastic simulation that we consider particularly suitable for local-scale studies characterized by high-resolution, high-quality datasets. Monte-Carlo-based optimization techniques are flexible and versatile, accommodate a wide variety of data and constraints of differing resolution and hardness, and thus have the potential to provide, in a geostatistical sense, highly detailed and realistic models of the pertinent target parameter distributions. Compared to more conventional approaches of this kind, our approach provides significant advancements in the way the larger-scale deterministic information resolved by the hydrogeophysical data is accounted for, which represents an inherently problematic, and as yet unresolved, aspect of Monte-Carlo-type conditional simulation techniques. We present the results of applying our algorithm to the integration of porosity log and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure. Our procedure is first tested on pertinent synthetic data and then applied to corresponding field data collected at the Boise Hydrogeophysical Research Site near Boise, Idaho, USA.
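A highly simplified sketch in the spirit of Monte-Carlo-type conditional simulation: perturb a 1-D porosity field by swapping unconditioned cells, accepting swaps that better match an assumed target variogram while keeping the cells fixed by the porosity log untouched. This is an illustration of the general idea only, not the authors' algorithm; the grid, log values, and target statistic are invented.

```python
# Greedy swap-based conditional simulation of a 1-D porosity field.
import numpy as np

rng = np.random.default_rng(5)
n, lag = 100, 1
field = rng.normal(0.25, 0.05, n)          # initial porosity realization
conditioned = np.zeros(n, bool)
conditioned[[10, 50, 90]] = True           # cells fixed by the porosity log
field[conditioned] = [0.30, 0.22, 0.28]    # assumed log values

def gamma(f):
    """Experimental semivariogram at a single lag."""
    return 0.5 * np.mean((f[lag:] - f[:-lag]) ** 2)

target = 0.001                             # assumed target variogram value
free = np.flatnonzero(~conditioned)
for _ in range(20_000):
    i, j = rng.choice(free, 2, replace=False)
    trial = field.copy()
    trial[i], trial[j] = trial[j], trial[i]
    if abs(gamma(trial) - target) < abs(gamma(field) - target):
        field = trial                      # greedy accept (annealing omitted)

print("final variogram misfit:", abs(gamma(field) - target))
```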