994 results for Flatness deviation
Abstract:
Background In an agreement assay, it is of interest to evaluate the degree of agreement between the different methods (devices, instruments or observers) used to measure the same characteristic. We propose in this study a technical simplification for inference about the total deviation index (TDI) estimate, to assess agreement between two devices for normally-distributed measurements, and describe its utility for evaluating inter- and intra-rater agreement when more than one reading per subject is available for each device. Methods We propose to estimate the TDI by constructing a probability interval of the difference in paired measurements between devices, and thereafter derive a tolerance interval (TI) procedure as a natural way to make inferences about probability limit estimates. We also describe how the proposed method can be used to compute bounds on the coverage probability. Results The approach is illustrated in a real case example in which the agreement between two instruments, a manual mercury sphygmomanometer and an OMRON 711 automatic device, is assessed in a sample of 384 subjects whose systolic blood pressure was measured twice with each device. A simulation study is implemented to evaluate and compare the accuracy of the approach against two established methods, showing that the TI approximation produces accurate empirical confidence levels that are reasonably close to the nominal confidence level. Conclusions The proposed method is straightforward, since the TDI estimate is derived directly from a probability interval of a normally-distributed variable on its original scale, without further transformations. A natural way of making inferences about this estimate is then to derive the appropriate TI. Construction of TIs for normal populations is implemented in most standard statistical packages, making it simple for any practitioner to apply our proposal to assess agreement.
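As a rough illustration of the kind of computation involved, the sketch below derives a TDI-style upper bound from paired differences via a two-sided normal tolerance interval with Howe's k-factor; the chi-square quantile is approximated by the Wilson-Hilferty formula. Function names and defaults here are our own illustrative assumptions, not those of the paper.

```python
from statistics import NormalDist, mean, stdev

def chi2_quantile(p, df):
    # Wilson-Hilferty approximation to the chi-square quantile
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * (2 / (9 * df)) ** 0.5) ** 3

def tdi_bound(differences, coverage=0.90, confidence=0.95):
    """Upper bound for a TDI-style statistic from a two-sided normal
    tolerance interval over paired differences (Howe's k-factor)."""
    n = len(differences)
    m, s = mean(differences), stdev(differences)
    z = NormalDist().inv_cdf((1 + coverage) / 2)
    chi2 = chi2_quantile(1 - confidence, n - 1)
    k = z * ((n - 1) * (1 + 1 / n) / chi2) ** 0.5
    lower, upper = m - k * s, m + k * s
    # the TDI bound is the largest absolute deviation covered by the TI
    return max(abs(lower), abs(upper))
```

A higher requested coverage widens the tolerance interval and therefore raises the bound, as expected.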
Abstract:
Electroencephalographic (EEG) recordings are, more often than not, corrupted by spurious artifacts, which should be rejected or cleaned by the practitioner. Because human screening of scalp EEG is error-prone, automatic artifact detection is of capital importance to ensure objective and reliable results. In this paper we propose a new approach for discriminating muscular activity in human scalp quantitative EEG (QEEG), based on time-frequency shape analysis. The impact of muscular activity on the EEG can be evaluated with this methodology. We present an application of this scoring as a preprocessing step for EEG signal analysis, in order to evaluate the amount of muscular activity in two sets of EEG recordings: patients with early-stage Alzheimer's disease and age-matched control subjects.
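The paper's time-frequency shape analysis is not reproduced here, but a much simpler spectral-ratio score conveys the general idea of quantifying muscular (EMG) contamination of an EEG epoch; the band limits and function names below are illustrative assumptions only.

```python
import numpy as np

def emg_score(signal, fs, band=(25.0, 45.0)):
    """Fraction of spectral power in a high-frequency band, a crude
    proxy for muscular (EMG) contamination of an EEG epoch.
    Illustrative only; not the time-frequency method of the paper."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    hf = power[(freqs >= band[0]) & (freqs <= band[1])].sum()
    return hf / power[1:].sum()  # skip the DC component
```

An epoch dominated by alpha-band activity scores near zero, while added high-frequency content drives the score up.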
Abstract:
Although frequent in humans, hypoxic and ischemic heart diseases are poorly documented in dogs, with only a few reports of acute myocardial infarction (AMI) in this species. Some electrocardiographic findings, such as ST segment elevation or depression, may suggest myocardial hypoxia/ischemia, but no studies have shown whether ST segment deviations are associated with myocardial injury and a serum increase of creatine phosphokinase isoenzyme MB (CPK-MB). To investigate possible myocardial cell injury under poor perfusion conditions, 38 dogs were studied: 20 with a normal electrocardiogram and 18 with ST segment elevation or depression, recorded in lead II at a paper speed of 50 mm/s and N sensitivity (1 mV = 1 cm). Serum measurement of CPK-MB in normal dogs (group 1) established control values (in ng/mL), which were compared with those obtained from dogs with ST deviation (group 2), allowing confirmation of myocardial injury or its absence. Mean CPK-MB values in groups 1 and 2 were 0.540 ng/mL (SD ±0.890) and 0.440 ng/mL (SD ±1.106), respectively. At the 5% significance level, the relation of CPK-MB to age, body mass and total creatine phosphokinase (CPK-T) was not significant in either group, and CPK-MB did not differ between groups 1 and 2. In conclusion, the human chemiluminescent immunometric assay kit can be used in the canine species, and hypoxia/ischemia revealed by ST segment deviation does not imply significant myocardial injury.
Abstract:
This work presents recent results of a design methodology used to estimate the positioning deviation of a gantry (Cartesian) manipulator, related mainly to the structural elastic deformation of its components under operational conditions. The case-study manipulator is of the gantry type and its basic dimensions are 1.53 m x 0.97 m x 1.38 m. The dimensions used to calculate the effective workspace due to end-effector path displacement are 1 m x 0.5 m x 0.5 m. The manipulator is composed of four basic modules, defined as module X, module Y, module Z and the terminal arm, to which the end-effector is connected. Each module's controlled axis performs a linear-parabolic positioning movement. The path-planning algorithm takes the maximum velocity and the total distance as input parameters for a given task; the acceleration and deceleration times are equal. The Denavit-Hartenberg parameterization method is used in the manipulator kinematics model. The gantry manipulator can be modeled as four rigid bodies with three translational degrees of freedom, connected as an open kinematic chain. Dynamic analyses were performed considering inertial parameter specifications such as the mass, inertia and center-of-gravity position of each module. These parameters are essential for correct dynamic modelling of the manipulator, owing to the multiple possibilities of motion and the manipulation of objects with different masses. The dynamic analysis consists of mathematically modelling the static and dynamic interactions among the modules. The structural deformations are computed with the finite element method (FEM).
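A minimal sketch of the linear-parabolic positioning law described above, assuming a symmetric trapezoidal velocity profile (equal acceleration and deceleration times). The acceleration value is a hypothetical extra parameter, since the abstract specifies only maximum velocity and total distance as planner inputs.

```python
def trapezoidal_position(t, distance, v_max, accel):
    """Position along a linear-parabolic (trapezoidal-velocity) move.
    Acceleration and deceleration phases have equal duration."""
    t_a = v_max / accel
    if v_max * t_a > distance:            # short move: v_max never reached
        t_a = (distance / accel) ** 0.5   # degenerate triangular profile
        v_max = accel * t_a
    t_total = distance / v_max + t_a
    t = min(max(t, 0.0), t_total)
    if t < t_a:                           # parabolic ramp-up
        return 0.5 * accel * t * t
    if t < t_total - t_a:                 # linear cruise at v_max
        return 0.5 * accel * t_a ** 2 + v_max * (t - t_a)
    td = t_total - t                      # parabolic ramp-down, by symmetry
    return distance - 0.5 * accel * td * td
```

By symmetry, half the distance is covered at half the total move time, which makes the profile easy to sanity-check.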
Abstract:
Title information in title frames
Abstract:
Thesis (Doctorate in Electrical Engineering), UANL, 2013.
Abstract:
Therapeutic drug monitoring is recommended for dose adjustment of immunosuppressive agents. The relevance of using the area under the curve (AUC) as a biomarker in the therapeutic monitoring of cyclosporine (CsA) in hematopoietic stem cell transplantation is supported by a growing number of studies. However, for reasons intrinsic to the way the AUC is calculated, its use in the clinical setting is impractical. Limited sampling strategies, based on regression approaches (R-LSS) or Bayesian approaches (B-LSS), are practical alternatives for satisfactory estimation of the AUC. For these methodologies to be applied effectively, however, their design must accommodate clinical reality, notably by requiring a minimal number of concentrations collected over a short sampling period. In addition, particular attention should be paid to their adequate development and validation. It is also important to note that irregularity in the timing of blood sample collection can have a non-negligible impact on the predictive performance of R-LSS; to date, this impact had not been studied. This doctoral thesis addresses these issues in order to allow precise and practical estimation of the AUC. The studies were carried out in the context of CsA use in pediatric patients who had undergone hematopoietic stem cell transplantation. First, multiple regression approaches as well as population pharmacokinetic (Pop-PK) analysis were used constructively to develop and adequately validate LSS. Next, several Pop-PK models were evaluated, keeping in mind their intended use for AUC estimation. The performance of B-LSS targeting different versions of the AUC was also studied. Finally, the impact of deviations between actual blood sampling times and the planned nominal times on the predictive performance of R-LSS was quantified using a simulation approach covering diverse, realistic scenarios of potential errors in the blood sampling schedule. This work first led to the development of R-LSS and B-LSS with satisfactory clinical performance that are also practical, since they involve 4 or fewer sampling points obtained within 4 hours post-dose. The Pop-PK analysis retained a two-compartment structural model with a lag time; however, the final model (notably with covariates) did not improve the performance of B-LSS compared with the structural models without covariates. In addition, we demonstrated that B-LSS perform better for the AUC derived from simulated concentrations excluding residual errors, which we termed the "underlying AUC", than for the observed AUC calculated directly from the measured concentrations. Finally, our results showed that irregularity in blood sampling times has an important impact on the predictive performance of R-LSS; this impact depends on the number of samples required, but even more on the duration of the sampling process involved. We also showed that sampling-time errors committed at moments when the concentration changes rapidly are those that most affect the predictive power of R-LSS. More interestingly, we highlighted that even though different R-LSS may perform similarly when based on nominal times, their tolerance to sampling-time errors can differ widely. Adequate consideration of the impact of these errors can therefore lead to more reliable selection and use of R-LSS. Through an in-depth investigation of the various aspects underlying limited sampling strategies, this thesis has provided notable methodological improvements and proposed new avenues to ensure their reliable and informed use, while promoting their suitability for clinical practice.
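As an illustration of the regression-based limited sampling idea (R-LSS), the sketch below fits AUC to a handful of post-dose concentrations by ordinary least squares. The synthetic data, function names and number of sampling points are our own assumptions, not those of the thesis.

```python
import numpy as np

def fit_lss(concs, auc):
    """Fit an R-LSS model: AUC ~ b0 + b . C, where each row of
    `concs` holds a few post-dose concentrations for one patient."""
    X = np.column_stack([np.ones(len(auc)), concs])
    coef, *_ = np.linalg.lstsq(X, auc, rcond=None)
    return coef

def predict_auc(coef, concs):
    """Predict AUC for one patient from the fitted coefficients."""
    return coef[0] + np.dot(concs, coef[1:])
```

In practice the fitted model would be validated on an independent cohort, and its sensitivity to deviations from the nominal sampling times assessed, as the thesis emphasises.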
Abstract:
Fine particles of cobalt ferrite were synthesized by the sol–gel method. Subsequent heat treatment at different temperatures yielded cobalt ferrites with different grain sizes. X-ray diffraction studies were carried out to elucidate the structure of all the samples. The dielectric permittivity and ac conductivity of all the samples were evaluated as functions of frequency, temperature and grain size. The variation of permittivity and ac conductivity with frequency reveals that the dispersion is due to Maxwell–Wagner type interfacial polarization in general, with a noted departure from the expected behaviour for the cold-synthesized samples. The high permittivity and conductivity of small grains were explained on the basis of the correlated barrier-hopping model.
Abstract:
Paroxysmal upgaze deviation is a syndrome first described in infants in 1988; only about 50 cases have been reported worldwide since then. Its etiology is unclear and its prognosis is variable; most case reports indicate that, during growth, the episodes tend to decrease in frequency and duration until they disappear. We describe a 16-month-old boy who, since the age of 11 months, has presented many episodes of variable conjugate upward deviation of the eyes, compensatory neck flexion and down-beat saccades on attempted downgaze. These events are predominantly diurnal, are exacerbated by stressful situations such as fasting or insomnia, and improve with sleep. Neurologic and ophthalmologic examinations are normal, and neuroimaging and EEG findings are unremarkable.
Abstract:
Background: Symbiotic relationships have contributed to major evolutionary innovations, the maintenance of fundamental ecosystem functions, and the generation and maintenance of biodiversity. However, the exact nature of host/symbiont associations, which has important consequences for their dynamics, is often poorly known due to limited understanding of symbiont taxonomy and species diversity. Among classical symbioses, figs and their pollinating wasps constitute a highly diverse keystone resource in tropical forest and savannah environments. Historically, they were considered to exemplify extreme reciprocal partner specificity (one-to-one host-symbiont species relationships), but recent work has revealed several more complex cases. There remains, however, a striking lack of studies with the specific aim of assessing symbiont diversity and how it varies across the geographic range of the host. Results: Here, we use molecular methods to investigate cryptic diversity in the pollinating wasps of a widespread Australian fig species. Standard barcoding genes and methods were not conclusive, but the incorporation of phylogenetic analyses and a recently developed nuclear barcoding gene (ITS2) gave strong support for five pollinator species. Each pollinator species was most common in a different geographic region, emphasising the importance of wide geographic sampling to uncover diversity, and the scope for divergence in coevolutionary trajectories across the host plant's range. In addition, most regions had multiple coexisting pollinators, raising the question of how they coexist in apparently similar or identical resource niches. Conclusion: Our study offers a striking example of extreme deviation from reciprocal partner specificity over the full geographical range of a fig-wasp system. It also suggests that superficially identical species may be able to coexist in a mutualistic setting, albeit at different frequencies across their fig host's range. We show that comprehensive sampling and molecular taxonomic techniques may be required to uncover the true structure of the cryptic biodiversity underpinning intimate ecological interactions.
Abstract:
We construct a quasi-sure version (in the sense of Malliavin) of geometric rough paths associated with a Gaussian process with long-time memory. As an application we establish a large deviation principle (LDP) for capacities for such Gaussian rough paths. Together with Lyons' universal limit theorem, our results yield immediately the corresponding results for pathwise solutions to stochastic differential equations driven by such Gaussian process in the sense of rough paths. Moreover, our LDP result implies the result of Yoshida on the LDP for capacities over the abstract Wiener space associated with such Gaussian process.
Abstract:
When exploring new perspectives on the impact of non-idealized vs. idealized body image in advertising, studies have focused mainly on body size, i.e., thin vs. heavy (Antioco et al., 2012; Smeesters & Mandel, 2006). Age remains largely unexplored, and the vast majority of ads in the market depict young models. The purpose of this research is therefore to investigate which images in advertisements, young or mature models, are more persuasive for older women (40+ years old). Two studies were conducted: the first was an exploratory qualitative analysis, which in turn helped to formulate the hypothesis tested in the subsequent experiment. The results of the in-depth interviews suggested a conflict between notions of imprisonment (the need to follow beauty standards) and freedom (the wish to deviate). The experiment showed, essentially, that among older consumers, ads portraying older models were as persuasive as ads portraying younger models. Limitations and future research are discussed.