908 results for unknown-input estimation
Abstract:
Time periods composing the stance phase of gait can be clinically meaningful parameters for revealing differences between normal and pathological gait. This study aimed, first, to describe a novel method for detecting stance and inner-stance temporal events based on foot-worn inertial sensors; second, to extract and validate relevant metrics from those events; and third, to investigate their suitability as clinical outcomes for gait evaluations. Forty-two subjects, including healthy subjects and patients before and after surgical treatment for ankle osteoarthritis, performed 50-m walking trials while wearing foot-worn inertial sensors and pressure insoles as a reference system. Several hypotheses were evaluated to detect heel-strike, toe-strike, heel-off, and toe-off based on kinematic features. Detected events were compared with the reference system on 3193 gait cycles and showed good accuracy and precision. Absolute and relative stance periods, namely loading response, foot-flat, and push-off, were then estimated, validated, and compared statistically between populations. Besides significant differences observed in stance duration, the analysis revealed differing tendencies, notably a shorter foot-flat in healthy subjects. The results indicated which features of the inertial sensors' signals should be preferred for precise and accurate detection of temporal events against a reference standard. The system is suitable for clinical evaluations and provides temporal analysis of gait beyond the common swing/stance decomposition, through a quantitative estimation of inner-stance phases such as foot-flat.
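As an illustrative, heavily simplified sketch of event detection from a foot-worn gyroscope (not the paper's validated rules): segment cycles at mid-swing angular-velocity peaks and take the nearest negative troughs as approximate toe-off and heel-strike. Sampling rate, thresholds, and signal names are assumptions.

```python
# Illustrative sketch only -- not the validated detection method of the study.
# Assumes the sagittal-plane (pitch) angular velocity of a foot-worn gyro, in rad/s.
import numpy as np
from scipy.signal import find_peaks

def detect_gait_events(gyro_pitch, fs=500.0):
    """Return (toe_off_idx, heel_strike_idx): indices of the negative angular-velocity
    troughs immediately before and after each mid-swing peak."""
    swing_peaks, _ = find_peaks(gyro_pitch, height=2.0, distance=int(0.5 * fs))
    troughs, _ = find_peaks(-gyro_pitch, height=0.5)
    toe_offs, heel_strikes = [], []
    for p in swing_peaks:
        before = troughs[troughs < p]
        after = troughs[troughs > p]
        if before.size and after.size:
            toe_offs.append(before[-1])     # last trough before mid-swing ~ toe-off
            heel_strikes.append(after[0])   # first trough after mid-swing ~ heel-strike
    return np.array(toe_offs), np.array(heel_strikes)
```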
Abstract:
The previously unknown pupa and adult male of Neobezzia fittkaui Wirth & Ratanaworabhan (Diptera, Ceratopogonidae). The pupa of Neobezzia fittkaui Wirth & Ratanaworabhan, 1972, collected from a mat of floating fern (Salvinia auriculata Aubl., Salviniaceae) in Ilha da Marchantaria near Manaus, Brazil and the reared adult male are described, photographed and illustrated for the first time. This is the first detailed pupal description for the genus Neobezzia Wirth & Ratanaworabhan.
Estimates of patient costs related with population morbidity: Can indirect costs affect the results?
Abstract:
A number of health economics works require patient cost estimates as a basic information input. However, the accuracy of cost estimates remains in general unspecified. We propose to investigate how the allocation of indirect costs or overheads can affect the estimation of patient costs in order to allow for improvements in the analysis of patient cost estimates. Instead of focusing on the costing method, this paper proposes to highlight the changes in variance explained that are observed when a methodology is chosen. We compare three overhead allocation methods for a specific Spanish population adjusted using the Clinical Risk Groups (CRG), and we obtain different series of full-cost group estimates. As a result, there are significant gains in the proportion of the variance explained, depending upon the methodology used. Furthermore, we find that the global amount of variation explained by risk adjustment models depends mainly on direct costs and is independent of the level of aggregation used in the classification system.
Abstract:
This paper considers a job search model where the environment is not stationary along the unemployment spell and where jobs do not last forever. Under this circumstance, reservation wages can be lower than without separations, as in a stationary environment, but they can also be initially higher because of the non-stationarity of the model. Moreover, the time-dependence of reservation wages is stronger than with no separations. The model is estimated structurally using Spanish data for the period 1985-1996. The main finding is that, although the decrease in reservation wages is the main determinant of the change in the exit rate from unemployment for the first four months, later on the only effect comes from the job offer arrival rate, given that acceptance probabilities are roughly equal to one.
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
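As one concrete illustration of the label-flipping computation mentioned above, here is a minimal Python sketch. The base learner (a decision tree) and the {-1, +1} label convention are assumptions for the example, not choices made in the paper.

```python
# Hedged sketch of a maximal-discrepancy penalty: minimizing empirical risk on a
# sample whose first-half labels are flipped is equivalent to maximizing
# err(h; first half) - err(h; second half) over the hypothesis class.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def maximal_discrepancy_penalty(X, y, learner=None):
    """Approximate max_h [err(h; first half) - err(h; second half)] via ERM on flipped labels."""
    X, y = np.asarray(X), np.asarray(y)
    n = len(y) // 2 * 2                      # use an even number of points
    X, y = X[:n], y[:n]
    y_flipped = y.copy()
    y_flipped[: n // 2] *= -1                # flip labels on the first half
    clf = learner or DecisionTreeClassifier()
    clf.fit(X, y_flipped)                    # ERM on the half-flipped sample
    pred = clf.predict(X)
    err_first = np.mean(pred[: n // 2] != y[: n // 2])
    err_second = np.mean(pred[n // 2:] != y[n // 2:])
    return err_first - err_second
```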
Abstract:
BACKGROUND: To determine male outpatient attenders' sexual behaviours, expectations and experience of talking about their sexuality and sexual health needs with a doctor. METHODS: A survey was conducted among all male patients aged 18-70, recruited from the two main medical outpatient clinics in Lausanne, Switzerland, in 2005-2006. The anonymous self-administered questionnaire included questions on sexual behaviour, HIV/STI information needs, expectations and experiences regarding discussion of sexual matters with a doctor. RESULTS: The response rate was 53.0% (N = 1452). The mean age was 37.7 years. Overall, 13.4% of patients were defined as at STI risk--i.e. having not consistently used condoms with casual partners in the last 6 months, or with a paid partner during the last intercourse--regarding their sexual behaviour in the last year. 90.9% would have liked their physician to ask them questions concerning their sexual life; only 61.4% had ever had such a discussion. The multivariate analysis showed that patients at risk tended to have the following characteristics: recruited from the HIV testing clinic, lived alone, declared no religion, had a low level of education, felt uninformed about HIV/AIDS, were younger, had had concurrent sexual partners in the last 12 months. However they were not more likely to have discussed sexual matters with their doctor than patients not at risk. CONCLUSION: Recording the sexual history and advice on the prevention of the risks of STI should become routine practice for primary health care doctors.
Abstract:
We construct a weighted Euclidean distance that approximates any distance or dissimilarity measure between individuals that is based on a rectangular cases-by-variables data matrix. In contrast to regular multidimensional scaling methods for dissimilarity data, the method leads to biplots of individuals and variables while preserving all the good properties of dimension-reduction methods that are based on the singular-value decomposition. The main benefits are the decomposition of variance into components along principal axes, which provide the numerical diagnostics known as contributions, and the estimation of nonnegative weights for each variable. The idea is inspired by the distance functions used in correspondence analysis and in principal component analysis of standardized data, where the normalizations inherent in the distances can be considered as differential weighting of the variables. In weighted Euclidean biplots we allow these weights to be unknown parameters, which are estimated from the data to maximize the fit to the chosen distances or dissimilarities. These weights are estimated using a majorization algorithm. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing the matrix and displaying its rows and columns in biplots.
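The weight-estimation step can be illustrated with a simplified Python sketch. The paper estimates the weights with a majorization algorithm; the stand-in below fits the squared dissimilarities by nonnegative least squares, so it is only an approximation of the idea, with all names chosen for illustration.

```python
# Hedged sketch: fit nonnegative per-variable weights so that a weighted Euclidean
# distance approximates given dissimilarities (NNLS stand-in for the majorization step).
import numpy as np
from scipy.optimize import nnls

def fit_variable_weights(X, delta):
    """X: (n, p) cases-by-variables matrix; delta: (n, n) target dissimilarities."""
    n, p = X.shape
    rows, targets = [], []
    for i in range(n):
        for j in range(i + 1, n):
            rows.append((X[i] - X[j]) ** 2)     # per-variable squared differences
            targets.append(delta[i, j] ** 2)    # squared target dissimilarity
    w, _ = nnls(np.asarray(rows), np.asarray(targets))
    return w

def weighted_euclidean(X, w):
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2 * w).sum(axis=-1))
```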
Abstract:
The treatments for ischemic stroke can only be administered in a narrow time-window. However, the ischemia onset time is unknown in ~30% of stroke patients (wake-up strokes). The objective of this study was to determine whether MR spectra of ischemic brains might allow the precise estimation of cerebral ischemia onset time. We modeled ischemic stroke in male ICR-CD1 mice using a permanent middle cerebral artery filament occlusion model with laser Doppler control of the regional cerebral blood flow. Mice were then subjected to repeated MRS measurements of ipsilateral striatum at 14.1 T. A striking initial increase in γ-aminobutyric acid (GABA) and no increase in glutamine were observed. A steady decline was observed for taurine (Tau), N-acetyl-aspartate (NAA) and similarly for the sum of NAA+Tau+glutamate that mimicked an exponential function. The estimation of the time of onset of permanent ischemia within 6 hours in a blinded experiment with mice showed an accuracy of 33±10 minutes. A plot of GABA, Tau, and neuronal marker concentrations against the ratio of acetate/NAA allowed precise separation of mice whose ischemia onset lay within arbitrarily chosen time-windows. We conclude that (1)H-MRS has the potential to detect the clinically relevant time of onset of ischemic stroke.
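A heavily hedged sketch of how such an onset-time estimate could be operationalized: fit a decaying exponential of the summed neuronal markers against time in reference animals with known occlusion times, then invert the fit for a new measurement. The functional form and parameter names are illustrative assumptions, not the study's calibrated model.

```python
# Illustrative only: exponential-decline calibration and inversion for onset time.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, c0, k, c_inf):
    return c_inf + (c0 - c_inf) * np.exp(-k * t)

def fit_decay(t_minutes, marker_sum):
    """Calibrate the decay curve on reference animals with known onset times."""
    p0 = (marker_sum.max(), 0.01, marker_sum.min())
    params, _ = curve_fit(decay, t_minutes, marker_sum, p0=p0, maxfev=10000)
    return params

def estimate_onset(marker_value, params):
    """Invert the calibrated curve to recover elapsed time since occlusion (minutes)."""
    c0, k, c_inf = params
    return -np.log((marker_value - c_inf) / (c0 - c_inf)) / k
```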
Abstract:
ABSTRACT Biomass is a fundamental measure for understanding the structure and functioning (e.g. fluxes of energy and nutrients in the food chain) of aquatic ecosystems. We aim to provide predictive models to estimate the biomass of Triplectides egleri Sattler, 1963, in a stream in Central Amazonia, based on body and case dimensions. We used body length, head-capsule width, interocular distance, and case length and width to derive biomass estimates. Linear, exponential and power regression models were used to assess the relationship between biomass and body or case dimensions. All regression models used in the biomass estimation of T. egleri were significant. The best fit between biomass and body or case dimensions was obtained using the power model, followed by the exponential and linear models. Body length provided the best estimate of biomass. However, the dimensions of sclerotized structures (interocular distance and head-capsule width) also provided good biomass predictions, and may be useful in estimating biomass of preserved and/or damaged material. Case width was the dimension of the case that provided the best estimate of biomass. Despite the weaker relationship, case width may be useful in studies that require low stress on individuals.
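The model comparison described above amounts to fitting three allometric forms and comparing their fit. A minimal Python sketch follows; the variable names (length_mm, biomass_mg) and the log-transform fitting strategy are assumptions for illustration.

```python
# Hedged sketch of the three fits named above (linear, exponential, power) relating
# biomass W to a single body or case dimension L, with R^2 reported for each.
import numpy as np
from scipy.stats import linregress

def fit_biomass_models(length_mm, biomass_mg):
    L = np.asarray(length_mm, dtype=float)
    W = np.asarray(biomass_mg, dtype=float)
    fits = {}

    lin = linregress(L, W)                                   # W = a + b*L
    fits["linear"] = {"a": lin.intercept, "b": lin.slope, "r2": lin.rvalue ** 2}

    ex = linregress(L, np.log(W))                            # W = a * exp(b*L)
    fits["exponential"] = {"a": np.exp(ex.intercept), "b": ex.slope, "r2": ex.rvalue ** 2}

    pw = linregress(np.log(L), np.log(W))                    # W = a * L**b
    fits["power"] = {"a": np.exp(pw.intercept), "b": pw.slope, "r2": pw.rvalue ** 2}
    return fits
```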
Abstract:
Two methods were evaluated for scaling a set of semivariograms into a unified function for kriging estimation of field-measured properties. Scaling is performed using the sample variances and sills of individual semivariograms as scale factors. Theoretical developments show that kriging weights are independent of the scaling factor, which appears simply as a constant multiplying both sides of the kriging equations. The scaling techniques were applied to four sets of semivariograms representing spatial scales from 30 x 30 m to 600 x 900 km. The experimental semivariograms in each set were successfully coalesced into a single curve using the variances and sills of the individual semivariograms. To evaluate the scaling techniques, kriged estimates derived from scaled semivariogram models were compared with those derived from unscaled models. Differences in kriged estimates of the order of 5% were found for the cases in which the scaling technique did not succeed in coalescing the individual semivariograms, which also means that the spatial variability of these properties differs. The proposed scaling techniques enhance the interpretation of semivariograms when a variety of measurements are made at the same location. They also reduce computational time for kriging estimation because kriging weights need to be calculated for only one variable; the weights remain unchanged for all other variables in the data set whose semivariograms are scaled.
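A minimal sketch of the scaling step, assuming the experimental semivariograms are stored as arrays keyed by variable name and the scale factor is either the sample variance or the fitted sill of each variable:

```python
# Minimal sketch: divide each experimental semivariogram by its scale factor
# (sample variance or sill) so several variables coalesce onto one dimensionless curve.
import numpy as np

def scale_semivariograms(semivariograms, scale_factors):
    """semivariograms: dict name -> gamma(h) array; scale_factors: dict name -> variance or sill."""
    return {name: np.asarray(gamma) / scale_factors[name]
            for name, gamma in semivariograms.items()}

# Because the scale factor multiplies both sides of the kriging system, the kriging
# weights solved from the scaled model are unchanged; only the kriging variance needs
# to be multiplied back by the factor for each variable.
```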
Abstract:
Atlas registration is a recognized paradigm for the automatic segmentation of normal MR brain images. Unfortunately, atlas-based segmentation has been of limited use in the presence of large space-occupying lesions. In fact, brain deformations induced by such lesions are added to normal anatomical variability and they may dramatically shift and deform anatomically or functionally important brain structures. In this work, we chose to focus on the problem of inter-subject registration of MR images with large tumors, inducing a significant shift of surrounding anatomical structures. First, a brief survey of the existing methods that have been proposed to deal with this problem is presented. This introduces a discussion of the requirements and desirable properties that we consider necessary for a registration method in this context: to have a dense and smooth deformation field and a model of lesion growth, to model different deformability for some structures, to introduce more prior knowledge, and to use voxel-based features with a similarity measure robust to intensity differences. In the second part of this work, we propose a new approach that overcomes some of the main limitations of the existing techniques while complying with most of the desired requirements above. Our algorithm combines the mathematical framework for computing a variational flow proposed by Hermosillo et al. [G. Hermosillo, C. Chefd'Hotel, O. Faugeras, A variational approach to multi-modal image matching, Tech. Rep., INRIA (February 2001).] with the radial lesion growth pattern presented by Bach et al. [M. Bach Cuadra, C. Pollo, A. Bardera, O. Cuisenaire, J.-G. Villemure, J.-Ph. Thiran, Atlas-based segmentation of pathological MR brain images using a model of lesion growth, IEEE Trans. Med. Imag. 23 (10) (2004) 1301-1314.]. Results on patients with a meningioma are visually assessed and compared to those obtained with the most similar method from the state of the art.
Abstract:
In recent years there has been explosive growth in the development of adaptive and data-driven methods. One of the efficient data-driven approaches is based on statistical learning theory (Vapnik 1998). The theory is based on the Structural Risk Minimisation (SRM) principle and has a solid statistical background. When applying SRM we try not only to reduce the training error, i.e. to fit the available data with a model, but also to reduce the complexity of the model and thereby the generalisation error. Many nonlinear learning procedures recently developed in neural networks and statistics can be understood and interpreted in terms of the structural risk minimisation inductive principle. A recent methodology based on SRM is called Support Vector Machines (SVM). At present SLT is still under intensive development and SVM find new areas of application (www.kernel-machines.org). SVM develop robust and nonlinear data models with excellent generalisation abilities, which is very important both for monitoring and forecasting. SVM are extremely good when the input space is high-dimensional and the training data set is not big enough to develop a corresponding nonlinear model. Moreover, SVM use only support vectors to derive decision boundaries. This opens a way to sampling optimisation, estimation of noise in data, quantification of data redundancy, etc. A presentation of SVM for spatially distributed data is given in (Kanevski and Maignan 2004).
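As a generic illustration of the SVM concepts summarised above (not the spatial application of Kanevski and Maignan), a short scikit-learn example on synthetic placeholder data:

```python
# Hedged illustration of SVM classification; the data are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))                      # moderately high-dimensional inputs
y = (X[:, 0] * X[:, 1] + 0.3 * rng.normal(size=400) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)

# Only the support vectors determine the decision boundary, which is what makes
# sample-reduction and data-redundancy analyses possible.
print("test accuracy:", model.score(X_te, y_te))
print("number of support vectors:", model.support_vectors_.shape[0])
```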
Abstract:
Waveform-based tomographic imaging of crosshole georadar data is a powerful method to investigate the shallow subsurface because of its ability to provide images of electrical properties in near-surface environments with unprecedented spatial resolution. A critical issue with waveform inversion is the a priori unknown source signal. Indeed, the estimation of the source pulse is notoriously difficult but essential for the effective application of this method. Here, we explore the viability and robustness of a recently proposed deconvolution-based procedure to estimate the source pulse during waveform inversion of crosshole georadar data, where changes in wavelet shape with location as a result of varying near-field conditions and differences in antenna coupling may be significant. Specifically, we examine whether a single, average estimated source current function can adequately represent the pulses radiated at all transmitter locations during a crosshole georadar survey, or whether a separate source wavelet estimation should be performed for each transmitter gather. Tests with synthetic and field data indicate that remarkably good tomographic reconstructions can be obtained using a single estimated source pulse when moderate to strong variability exists in the true source signal with antenna location. Only in the case of very strong variability in the true source pulse are tomographic reconstructions clearly improved by estimating a different source wavelet for each transmitter location.
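A hedged sketch of a deconvolution-style source-wavelet estimate of the kind discussed: given observed traces and synthetic impulse responses from the current subsurface model, the source spectrum is obtained as a regularized spectral ratio stacked over the traces of a gather. This generic stabilized spectral division is an assumption for illustration, not the exact procedure of the cited study.

```python
# Hedged sketch of regularized frequency-domain deconvolution for a source wavelet.
import numpy as np

def estimate_source_wavelet(observed, synthetic, eps=1e-3):
    """observed, synthetic: (n_traces, n_samples) arrays; returns a time-domain wavelet."""
    D = np.fft.rfft(observed, axis=1)
    G = np.fft.rfft(synthetic, axis=1)
    # Stabilized least-squares ratio, stacked over all traces of the gather.
    num = (np.conj(G) * D).sum(axis=0)
    den = (np.conj(G) * G).real.sum(axis=0)
    S = num / (den + eps * den.max())
    return np.fft.irfft(S, n=observed.shape[1])
```

Estimating one wavelet per transmitter gather would simply mean calling this routine on each gather separately instead of on all traces at once.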
Abstract:
The geochemical compositions of biogenic carbonates are increasingly used for palaeoenvironmental reconstructions. The skeletal δ18O-temperature relationship is dependent on water salinity, so many recent studies have focused on the Mg/Ca and Sr/Ca ratios because those ratios in water do not change significantly on short time scales. Thus, those elemental ratios are considered to be good palaeotemperature proxies in many biominerals, although their use remains ambiguous in bivalve shells. Here, we present the high-resolution Mg/Ca ratios of two modern species of juvenile and adult oyster shells, Crassostrea gigas and Ostrea edulis. These specimens were grown in controlled conditions for over one year in two different locations. In situ monthly Mn-marking of the shells has been used for day calibration. The daily Mg/Ca ratios in the shell have been measured with an electron microprobe. The high frequency Mg/Ca variation of all specimens displays good synchronism with lunar cycles, suggesting that tides strongly influence the incorporation of Mg/Ca into the shells. Highly significant correlation coefficients (0.70<R<0.83, p<0.0001) between the Mg/Ca ratios and the seawater temperature are obtained only for juvenile C. gigas samples, while metabolic control of Mg/Ca incorporation and lower shell growth rates preclude the use of the Mg/Ca ratio in adult shells as a palaeothermometer. Data from three juvenile C. gigas shells from the two study sites are selected to establish a relationship: T = 3.77Mg/Ca + 1.88, where T is in degrees C and Mg/Ca in mmol/mol. (c) 2012 Elsevier B.V. All rights reserved.
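Applied directly, the published calibration reads as follows; the example input value is arbitrary and the calibration is stated as valid only for the juvenile C. gigas data described above.

```python
# T = 3.77 * Mg/Ca + 1.88, with T in deg C and Mg/Ca in mmol/mol (juvenile C. gigas only).
def mgca_to_temperature(mg_ca_mmol_mol):
    return 3.77 * mg_ca_mmol_mol + 1.88

print(mgca_to_temperature(5.0))   # e.g. 5.0 mmol/mol -> ~20.7 deg C
```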
Abstract:
The sustainability of marine resources and their ecosystem requires responsible management of fisheries. Knowing the spatial distribution of fishing effort, and in particular of fishing operations, is essential to improve fishery monitoring and the analysis of species' vulnerability to fishing. Currently, in the Peruvian anchovy fishery, information on effort and catches is collected through an on-board observer programme, but this represents a sample of only 2% of all fishing trips. On the other hand, the position of each vessel of the fleet is available roughly every hour thanks to the vessel monitoring system (VMS), although these data do not indicate when or where the fishing sets occurred. Artificial neural networks (ANN) could be a statistical method capable of inferring that information: trained on a sample for which the positions of the sets are known (the 2% mentioned above), they establish analytical relationships between the sets and certain geometric characteristics of the trajectories observed by the VMS and thus, from the latter, identify the position of the fishing operations. Applying the neural network requires a preliminary analysis that examines the sensitivity of the network to variations in its parameters and training data sets, and that allows us to develop criteria to define the network structure and to interpret its results appropriately. The problem described in the previous paragraph, applied specifically to the anchovy (Engraulis ringens), is detailed in the first chapter, while the second gives a theoretical review of neural networks. The construction and pre-processing of the database and the definition of the network structure prior to the sensitivity analysis are then described. Next, the results of the analysis are presented, in which we obtain an estimate of 100% of the sets, of which approximately 80% are correctly located and 20% have a location error. Finally, the strengths and weaknesses of the technique employed, of potential alternative methods, and of the perspectives opened by this work are discussed.
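A hedged sketch of the workflow described above, using a scikit-learn multilayer perceptron as a stand-in for the thesis's network; the feature set and network size are illustrative assumptions, not those of the thesis.

```python
# Hedged sketch: train a feed-forward network on trajectory features derived from VMS
# positions, using the observer-program sample as labels for fishing sets.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def train_fishing_set_detector(features, is_fishing_set):
    """features: (n_points, n_features) array, e.g. speed, acceleration, turning angle;
    is_fishing_set: binary labels from the on-board observer sample (~2% of trips)."""
    model = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
    )
    return model.fit(features, is_fishing_set)

# After training, the model is applied to the remaining VMS trajectories to infer
# where fishing operations (sets) most likely occurred.
```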