999 results for erroneous data


Relevance:

70.00%

Publisher:

Abstract:

I present results of my evaluation to identify topographic lineaments that are potentially related to post-glacial faulting using bare-earth LiDAR topographic data near Ridley Island, British Columbia. The purpose of this evaluation has been to review bare-earth LiDAR data for evidence of post-glacial faulting in the area surrounding Ridley Island and to provide a map of the potential faults to review and possibly field check. My work consisted of an extensive literature review to understand the tectonic, geologic, glacial and sea level history of the area and analysis of bare-earth LiDAR data for Ridley Island and the surrounding region. Ridley Island and the surrounding north coast of British Columbia have a long and complex tectonic and geologic history. The north coast of British Columbia consists of a series of accreted terranes and some post-accretionary deposits. The accreted terranes were attached to the North American continent during subduction of the Pacific Plate between approximately 200 Ma and 10 Ma. The terrane and post-accretionary deposits are metamorphosed sedimentary, volcanic and intrusive rocks. The rocks have experienced significant deformation and have been intruded by plutonic bodies. At approximately 10 Ma, subduction of the Pacific Plate beneath the North America Plate ceased along the central and north coast of British Columbia and the Queen Charlotte Fault Zone was formed. The Queen Charlotte Fault Zone is a transform-type fault that separates the Pacific Plate from the North America Plate. Within the past 1 million years, the area has experienced multiple glacial/interglacial cycles. The most recent glacial cycle occurred approximately 23,000 to 13,500 years ago. Few Quaternary deposits have been mapped in the area. The vast majority of seismicity around the northwest coast of British Columbia occurs along the Queen Charlotte Fault Zone.
Numerous faults have been mapped in the area, but there is currently no evidence to suggest these faults are active (i.e., have evidence for post-glacial surface displacement or deformation). No earthquakes have been recorded within 50 km of Ridley Island. Several small earthquakes (less than magnitude 6) have been recorded within 100 km of the island. These earthquakes have not been correlated to active faults. GPS data suggest there is ongoing strain in the vicinity of Ridley Island. The strain has the potential to be released along faults, but the calculated strain may be a result of erroneous data or may be accommodated aseismically. Currently, the greatest known seismic hazard to Ridley Island is the Queen Charlotte Fault Zone. LiDAR data for Ridley Island, Digby Island, Lelu Island and portions of Kaien Island, Smith Island and the British Columbia mainland were reviewed and analyzed for evidence of post-glacial faulting. The data showed a strong fabric across the landscape with a northwest-southeast trend that appears to mirror the observed foliation in the area. A total of 80 potential post-glacial faults were identified. Three lineaments are categorized as high, forty-one as medium and thirty-six as low. The identified features should be examined in the field to further assess potential activity. My analysis did not include areas outside of the LiDAR coverage; however, faulting may be present there. LiDAR data analysis is only useful for detecting faults with surficial expressions. Faulting without obvious surficial expressions may be present in the study area.

Relevance:

70.00%

Publisher:

Abstract:

SARAL/AltiKa GDR-T data are analyzed to assess the quality of the significant wave height (SWH) measurements. SARAL along-track SWH plots reveal cases of erroneous data, more or less isolated, that are not detected by the quality flags. The anomalies are often correlated with strong attenuation of the Ka-band backscatter coefficient, which is sensitive to clouds and rain. A quality test based on the 1 Hz standard deviation is proposed to detect such anomalies. Comparison with buoys shows that SARAL SWH is more accurate than Jason-2, particularly at low SWH, and globally does not require any correction. Results are better for open-ocean than for coastal buoys; the scatter and the number of outliers are much larger for coastal buoys. SARAL is then compared with Jason-2 and Cryosat-2. The altimeter data are extracted from the global altimeter SWH Ifremer database, which includes specific corrections to calibrate the various altimeters. The comparison confirms the high quality of SARAL SWH: the 1 Hz standard deviation is much smaller than for Jason-2 and Cryosat-2, particularly at low SWH. Furthermore, the results show that the corrections applied to Jason-2 and Cryosat-2 in the database are effective, improving the global agreement between the three altimeters.
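The proposed quality test lends itself to a simple sketch. The threshold below is a hypothetical value for illustration; the actual criterion applied to the SARAL data is not reproduced here.

```python
import numpy as np

def flag_swh_anomalies(swh_std_1hz, threshold=1.0):
    """Flag records whose 1 Hz SWH standard deviation exceeds a
    threshold (hypothetical value, for illustration only)."""
    return np.asarray(swh_std_1hz, dtype=float) > threshold

# A spike in the along-track 1 Hz standard deviation marks a suspect record.
print(flag_swh_anomalies([0.2, 0.25, 2.1, 0.3]))  # [False False  True False]
```

Records flagged this way would then be excluded (or inspected) before comparison with buoy measurements.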

Relevance:

60.00%

Publisher:

Abstract:

Reliability has emerged as a critical design constraint, especially in memories. Designers go to great lengths to guarantee fault-free operation of the underlying silicon by adopting redundancy-based techniques, which essentially try to detect and correct every single error. However, such techniques come at the cost of large area, power and performance overheads, leading many researchers to doubt their efficiency, especially for error-resilient systems where 100% accuracy is not always required. In this paper, we present an alternative method focusing on the confinement of the output error induced by any reliability issues. Focusing on memory faults, rather than correcting every single error, the proposed method exploits the statistical characteristics of the target application and replaces any erroneous data with the best available estimate of that data. To realize the proposed method, a RISC processor is augmented with custom instructions and special-purpose functional units. We apply the method on the enhanced processor by studying the statistical characteristics of the various algorithms involved in a popular multimedia application. Our experimental results show that, in contrast to state-of-the-art fault tolerance approaches, we are able to reduce runtime and area overhead by 71.3% and 83.3%, respectively.
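The confinement idea, substituting a detected faulty value with a statistically likely one instead of correcting it exactly, can be sketched as follows. The estimator here (the mean of the fault-free samples) is a hypothetical stand-in for the application-specific statistics the paper exploits.

```python
import numpy as np

def confine_errors(samples, fault_mask):
    """Replace samples flagged as faulty with the mean of the remaining
    valid samples -- a stand-in for the application-specific 'best
    available estimate' described in the paper."""
    samples = np.asarray(samples, dtype=float)
    valid = ~np.asarray(fault_mask)
    out = samples.copy()
    out[~valid] = samples[valid].mean()  # simple global statistic
    return out

data = [10.0, 11.0, 250.0, 12.0]   # 250.0 is a corrupted memory read
mask = [False, False, True, False]
print(confine_errors(data, mask))  # [10. 11. 11. 12.]
```

For error-resilient workloads such as multimedia, a plausible estimate like this keeps the output error bounded without paying the full cost of detect-and-correct redundancy.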

Relevance:

60.00%

Publisher:

Abstract:

Over the past few years, the number of wireless network users has been increasing. Until now, Radio-Frequency (RF) has been the dominant technology. However, the electromagnetic spectrum in this region is becoming saturated, calling for alternative wireless technologies. Recently, with the growing market for LED lighting, Visible Light Communication (VLC) has been drawing attention from the research community. First, the LED is an efficient device for illumination. Second, it offers easy modulation and high bandwidth. Finally, it can combine illumination and communication in the same device; in other words, it allows the implementation of highly efficient wireless communication systems. One of the most important aspects of a communication system is its reliability when working over noisy channels. In these scenarios, the received data can be affected by errors. To ensure proper system operation, a channel encoder is usually employed. Its function is to encode the data to be transmitted in order to increase system performance. It commonly uses error-correcting codes (ECC), which append redundant information to the original data. At the receiver side, the redundant information is used to recover the erroneous data. This dissertation presents the implementation steps of a channel encoder for VLC. Several techniques were considered, such as Reed-Solomon and convolutional codes, block and convolutional interleaving, CRC and puncturing. A detailed analysis of the characteristics of each technique was made in order to choose the most appropriate ones. Simulink models were created to simulate how different codes behave in different scenarios. Later, the models were implemented on an FPGA and simulations were performed. Hardware co-simulations were also implemented to speed up the simulations. In the end, different techniques were combined to create a complete channel encoder capable of detecting and correcting random and burst errors, owing to the use of an RS(255,213) code with a block interleaver. Furthermore, after the decoding process, the proposed system can identify uncorrectable errors in the decoded data thanks to the CRC-32 algorithm.
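Two of the techniques named above, block interleaving and CRC-32 error detection, can be sketched in software. This is an illustration under simplifying assumptions, not the dissertation's FPGA design; `zlib.crc32` stands in for a hardware CRC unit.

```python
import zlib

def block_interleave(data, rows, cols):
    """Write row-by-row, read column-by-column: after de-interleaving,
    a burst of channel errors is spread across several codewords."""
    assert len(data) == rows * cols
    return bytes(data[r * cols + c] for c in range(cols) for r in range(rows))

def block_deinterleave(data, rows, cols):
    # De-interleaving is interleaving with rows and cols swapped.
    return block_interleave(data, cols, rows)

payload = bytes(range(12))
tagged = payload + zlib.crc32(payload).to_bytes(4, "big")  # append CRC-32
sent = block_interleave(tagged, 4, 4)
received = block_deinterleave(sent, 4, 4)
body, crc = received[:-4], int.from_bytes(received[-4:], "big")
print(zlib.crc32(body) == crc)  # True: no uncorrectable errors detected
```

The interleaver spreads a burst of errors over several codewords, so a Reed-Solomon decoder sees only a few symbol errors per codeword; the CRC then flags any residual uncorrectable errors, as in the proposed system.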

Relevance:

30.00%

Publisher:

Abstract:

Joint generalized linear models and double generalized linear models (DGLMs) were designed to model outcomes for which the variability can be explained using factors and/or covariates. When such factors operate, the usual normal regression models, which inherently exhibit constant variance, will under-represent variation in the data and hence may lead to erroneous inferences. For count and proportion data, such noise factors can generate a so-called overdispersion effect, and the use of binomial and Poisson models underestimates the variability and, consequently, incorrectly indicates significant effects. In this manuscript, we propose a DGLM from a Bayesian perspective, focusing on the case of proportion data, where the overdispersion can be modeled using a random effect that depends on some noise factors. The posterior joint density function was sampled using Markov chain Monte Carlo algorithms, allowing inference on the model parameters. An application to a data set on apple tissue culture is presented, for which it is shown that the Bayesian approach is quite feasible, even when limited prior information is available, thereby generating valuable insight for the researcher about the experimental results.
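The overdispersion effect described above can be made concrete with the beta-binomial model, where the success probability is itself random: its variance exceeds the plain binomial variance by a factor 1 + (n - 1)ρ, with ρ the intra-cluster correlation. A minimal numeric illustration (the values are arbitrary, not from the apple tissue culture data):

```python
def binomial_var(n, p):
    """Variance of a binomial count with fixed success probability p."""
    return n * p * (1 - p)

def beta_binomial_var(n, p, rho):
    """Variance when p is itself random (beta-binomial):
    inflated by the overdispersion factor 1 + (n - 1) * rho."""
    return n * p * (1 - p) * (1 + (n - 1) * rho)

n, p, rho = 20, 0.3, 0.1
print(round(binomial_var(n, p), 2))           # 4.2
print(round(beta_binomial_var(n, p, rho), 2)) # 12.18 -- nearly 3x larger
```

Fitting a plain binomial model to such data would report standard errors based on the smaller variance, which is exactly the mechanism by which significant effects are incorrectly indicated.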

Relevance:

30.00%

Publisher:

Abstract:

Here we make an initial step toward the development of an ocean assimilation system that can constrain the modelled Atlantic Meridional Overturning Circulation (AMOC) to support climate predictions. A detailed comparison is presented of 1° and 1/4° resolution global model simulations with and without sequential data assimilation, to the observations and transport estimates from the RAPID mooring array across 26.5° N in the Atlantic. Comparisons of modelled water properties with the observations from the merged RAPID boundary arrays demonstrate the ability of in situ data assimilation to accurately constrain the east-west density gradient between these mooring arrays. However, the presence of an unconstrained "western boundary wedge" between Abaco Island and the RAPID mooring site WB2 (16 km offshore) leads to the intensification of an erroneous southwards flow in this region when in situ data are assimilated. The result is an overly intense southward upper mid-ocean transport (0–1100 m) as compared to the estimates derived from the RAPID array. Correction of upper layer zonal density gradients is found to compensate mostly for a weak subtropical gyre circulation in the free model run (i.e. with no assimilation). Despite the important changes to the density structure and transports in the upper layer imposed by the assimilation, very little change is found in the amplitude and sub-seasonal variability of the AMOC. This shows that assimilation of upper layer density information projects mainly on the gyre circulation with little effect on the AMOC at 26° N due to the absence of corrections to density gradients below 2000 m (the maximum depth of Argo). The sensitivity to initial conditions was explored through two additional experiments using a climatological initial condition. 
These experiments showed that the weak bias in gyre intensity in the control simulation (without data assimilation) develops over a period of about 6 months, but does so independently from the overturning, with no change to the AMOC. However, differences in the properties and volume transport of North Atlantic Deep Water (NADW) persisted throughout the 3-year simulations, resulting in a difference of 3 Sv in AMOC intensity. The persistence of these dense water anomalies and their influence on the AMOC is promising for the development of decadal forecasting capabilities. The results suggest that the deeper waters must be accurately reproduced in order to constrain the AMOC.

Relevance:

30.00%

Publisher:

Abstract:

Data assimilation methods which avoid the assumption of Gaussian error statistics are being developed for geoscience applications. We investigate how the relaxation of the Gaussian assumption affects the impact observations have within the assimilation process. The effect of non-Gaussian observation error (described by the likelihood) is compared to previously published work studying the effect of a non-Gaussian prior. The observation impact is measured in three ways: the sensitivity of the analysis to the observations, the mutual information, and the relative entropy. These three measures have all been studied in the case of Gaussian data assimilation and, in this case, have a known analytical form. It is shown that the analysis sensitivity can also be derived analytically when at least one of the prior or likelihood is Gaussian. This derivation shows an interesting asymmetry in the relationship between analysis sensitivity and analysis error covariance when the two different sources of non-Gaussian structure are considered (likelihood vs. prior). This is illustrated for a simple scalar case and used to infer the effect of the non-Gaussian structure on mutual information and relative entropy, which are more natural choices of metric in non-Gaussian data assimilation. It is concluded that approximating non-Gaussian error distributions as Gaussian can give significantly erroneous estimates of observation impact. The degree of the error depends not only on the nature of the non-Gaussian structure, but also on the metric used to measure the observation impact and the source of the non-Gaussian structure.
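As a reference point for the non-Gaussian cases studied, in the fully Gaussian scalar case the sensitivity of the analysis to the observation has the known analytical form of the gain K = σ_b²/(σ_b² + σ_o²). A minimal sketch of that Gaussian baseline:

```python
def gaussian_analysis(xb, sigma_b2, y, sigma_o2):
    """Scalar Gaussian assimilation: the analysis is a variance-weighted
    blend of background xb and observation y. The sensitivity of the
    analysis to the observation is the gain K."""
    K = sigma_b2 / (sigma_b2 + sigma_o2)
    xa = xb + K * (y - xb)            # analysis
    sigma_a2 = (1 - K) * sigma_b2     # analysis error variance
    return xa, K, sigma_a2

xa, K, sa2 = gaussian_analysis(xb=1.0, sigma_b2=1.0, y=3.0, sigma_o2=1.0)
print(xa, K, sa2)  # 2.0 0.5 0.5
```

With equal background and observation error variances the analysis sits halfway between the two (K = 0.5); the non-Gaussian prior and likelihood cases examined in the paper break this simple symmetry between sensitivity and analysis error covariance.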

Relevance:

30.00%

Publisher:

Abstract:

We use sunspot group observations from the Royal Greenwich Observatory (RGO) to investigate the effects of intercalibrating data from observers with different visual acuities. The tests are made by counting the number of groups RB above a variable cut-off threshold of observed total whole-spot area (uncorrected for foreshortening) to simulate what a lower-acuity observer would have seen. The synthesised annual means of RB are then re-scaled to the full observed RGO group number RA using a variety of regression techniques. It is found that a very high correlation between RA and RB (rAB > 0.98) does not prevent large errors in the intercalibration (for example, sunspot maximum values can be over 30 % too large even for such levels of rAB). In generating the backbone sunspot number (RBB), Svalgaard and Schatten (2015, this issue) force regression fits to pass through the scatter plot origin, which generates unreliable fits (the residuals do not form a normal distribution) and causes sunspot cycle amplitudes to be exaggerated in the intercalibrated data. It is demonstrated that the use of Quantile-Quantile ("Q-Q") plots to test for a normal distribution is a useful indicator of erroneous and misleading regression fits. Ordinary least squares linear fits, not forced to pass through the origin, are sometimes reliable (although the optimum method is shown to be different when matching peak and average sunspot group numbers). However, other fits are only reliable if non-linear regression is used. From these results it is entirely possible that the inflation of solar cycle amplitudes in the backbone group sunspot number as one goes back in time, relative to related solar-terrestrial parameters, is entirely caused by the use of inappropriate and non-robust regression techniques to calibrate the sunspot data.
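The bias introduced by forcing a regression through the origin is easy to reproduce on synthetic data with a genuine offset; the numbers below are purely illustrative and are not the RGO series.

```python
import numpy as np

# Synthetic calibration data with a genuine offset: RA = 2*RB + 5.
rb = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
ra = 2.0 * rb + 5.0

# Free ordinary-least-squares fit recovers both slope and intercept.
slope, intercept = np.polyfit(rb, ra, 1)

# Fit forced through the origin: slope = sum(ra*rb) / sum(rb*rb).
slope0 = np.sum(ra * rb) / np.sum(rb * rb)

print(round(slope, 2), round(intercept, 2))  # 2.0 5.0
print(round(slope0, 2))  # 3.36 -- inflated, the offset is absorbed into the slope
```

The forced fit overestimates the slope because the nonzero intercept has nowhere else to go, which is the mechanism by which cycle amplitudes become exaggerated in the intercalibrated series; checking the residuals (for example with a Q-Q plot) exposes the misfit.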

Relevance:

30.00%

Publisher:

Abstract:

In the context of either Bayesian or classical sensitivity analyses of over-parametrized models for incomplete categorical data, it is well known that prior dependence of posterior inferences on nonidentifiable parameters, or the choice of overly parsimonious over-parametrized models, may lead to erroneous conclusions. Nevertheless, some authors either pay no attention to which parameters are nonidentifiable or do not appropriately account for possible prior dependence. We review the literature on this topic and consider simple examples to emphasize that, in both inferential frameworks, the subjective components can influence results in nontrivial ways, irrespective of the sample size. Specifically, we show that prior distributions commonly regarded as slightly informative or noninformative may actually be too informative for nonidentifiable parameters, and that the choice of over-parametrized models may drastically impact the results, suggesting that a careful examination of their effects should be considered before drawing conclusions.

Relevance:

30.00%

Publisher:

Abstract:

This study aimed to assess measurements of temperature and relative humidity obtained with a HOBO data logger under various conditions of exposure to solar radiation, comparing them with those obtained with a temperature/relative humidity probe and a copper-constantan thermocouple psychrometer, which are considered the standards for such measurements. Data were collected over a 6-day period (25 March to 1 April 2010), during which the equipment was monitored continuously and simultaneously. We employed the following combinations of equipment and conditions: a HOBO data logger in full sunlight; a HOBO data logger shielded within a white plastic cup with windows for air circulation; a HOBO data logger shielded within a gill-type shelter (a multi-plate plastic prototype); a copper-constantan thermocouple psychrometer exposed to natural ventilation and protected from sunlight; and a temperature/relative humidity probe under a commercial multi-plate radiation shield. Comparisons between the measurements obtained with the various devices were made on the basis of statistical indicators: linear regression, with coefficient of determination; index of agreement; maximum absolute error; and mean absolute error. The prototype multi-plate (gill-type) shelter used to protect the HOBO data logger was found to provide the best protection against the effects of solar radiation on measurements of temperature and relative humidity. The precision and accuracy of a device that measures temperature and relative humidity depend on an efficient shelter that minimizes the interference caused by solar radiation, thereby avoiding erroneous analysis of the data obtained.
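Two of the statistical indicators used, mean absolute error and Willmott's index of agreement, can be sketched as follows; the temperature values are hypothetical, not the study's measurements.

```python
import numpy as np

def mean_absolute_error(obs, ref):
    """Mean absolute difference between device readings and the standard."""
    obs, ref = np.asarray(obs, float), np.asarray(ref, float)
    return np.mean(np.abs(obs - ref))

def index_of_agreement(obs, ref):
    """Willmott's index of agreement d (1 = perfect agreement)."""
    obs, ref = np.asarray(obs, float), np.asarray(ref, float)
    num = np.sum((obs - ref) ** 2)
    den = np.sum((np.abs(obs - ref.mean()) + np.abs(ref - ref.mean())) ** 2)
    return 1.0 - num / den

ref = [20.0, 22.0, 24.0, 26.0]  # e.g. psychrometer temperatures (standard)
obs = [20.5, 21.5, 24.5, 26.5]  # e.g. sheltered HOBO logger readings
print(mean_absolute_error(obs, ref))          # 0.5
print(round(index_of_agreement(obs, ref), 3)) # 0.988
```

An unsheltered logger heated by direct sunlight would show a larger MAE and a lower index of agreement against the standard, which is how the shelters were ranked.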

Relevance:

30.00%

Publisher:

Abstract:

Compensatory Health Beliefs (CHB) are a common strategy used to reduce the cognitive discomfort that arises from participating in recognizably unhealthy behaviors. The current research examines relationships between CHB and other cognitive variables. Data were collected in two phases, using survey methodology. Study 1 explored relationships between the use of CHB, impulsiveness, and coping styles. Study 2 expanded the inquiry by exploring relationships to health perception and knowledge. Results revealed that participants who scored high on overall CHB were more likely to: engage in maladaptive coping strategies (r = .47, p < .01), including avoidant coping styles (r = .38, p < .01) and unhealthy coping styles (r = .47, p < .01); score higher on measures of impulsivity (r = .43, p < .01); be less well-informed about their general health (r = -.21, p < .05); eat fast food more often (r = p < .05); and consider it safe to smoke more frequently (r = .18, p < .05). Participants with lower CHB scores considered themselves more well-informed about their general health (r = -.21, p < .05), including understanding the minimum recommended amounts of physical activity needed to maintain health (r = -.35, p < .01) and knowing the health risks of stress (r = -.19, p < .05). In addition, maladaptive coping was correlated with a lack of general health knowledge (r = -.22, p < .01), less understanding of the risks of stress and alcohol (r = .20, p < .05), less knowledge of the recommended daily amounts of physical activity needed for health (r = -.26, p < .01), less frequent exercise (r = -.26, p < .01), and generally more unhealthy daily habits (r = -.26, p < .01). These findings contribute to a new area of investigation and may be useful to those who want to motivate behavior change.

Relevance:

30.00%

Publisher:

Abstract:

The angle concept is a multifaceted concept with both static and dynamic definitions. The static definition refers to “the space between two rays” or “the intersection of two rays at the same end point” (Mitchelmore & White, 1998), whereas the dynamic definition highlights that the size of an angle is the amount of rotation in a direction (Fyhn, 2006). Since the two definitions represent diverse situations and have unique limitations (Henderson & Taimina, 2005), students may hold misconceptions about the angle concept. In this regard, the aim of this research was to explore high achievers’ knowledge of the definition of the angle concept as well as to investigate their erroneous answers on the angle concept.

104 grade 6 students drawn from four well-established elementary schools in Yozgat, Turkey participated in this research. All participants were selected via a purposive sampling method, and their mathematics grades were 4 or 5 out of 5. Data were collected through four questions prepared by considering the learning competencies set out in the grade 6 curriculum in Turkey and the findings of previous studies whose purposes were to identify students’ misconceptions of the angle concept. The findings were analyzed by two researchers, and their inter-rater agreement was calculated as 0.91, or almost perfect. Thereafter, coding discrepancies were resolved, and consensus was established.


In the first question, students were asked to answer a multiple-choice question consisting of two static definitions and one dynamic definition of the angle concept. Only 38 of 104 students were able to recognize all three definitions. Likewise, Mitchelmore and White (1998) found that less than 10% of grade 4 students knew the dynamic definition of the angle concept. Additionally, the purpose of the second question was to figure out how well students could recognize a 0-degree angle. We found that 49 of 104 students were unable to recognize MXW as an angle. While 6 students indicated that the size of MXW is 0, another 6 students stated that the size of MXW is 360. Therefore, 12 of 104 students correctly answered this question. On the other hand, 28 of 104 students recognized the MXW angle as a 180-degree angle. This finding demonstrates that these students have difficulties in naming angles. Moreover, the third question consisted of three concentric circles with center O and two radii of the outer circle, and the intersections of the radii with these circles were labeled. Then, students were asked to compare the sizes of the AOB, GOD and EOF angles. Only 36 of 104 students answered correctly by indicating that all three angles are equal, whereas 68 of 104 students responded incorrectly, stating that AOB < GOD < EOF. These students erroneously thought the size of the angle is related to either the size of the arc marking the angle or the area between the arms of the angle and the arc marking the angle. These two erroneous strategies for determining the size of angles have been reported by several studies (Clausen-May, 2008; Devichi & Munier, 2013; Kim & Lee, 2014; Mitchelmore, 1998; Wilson & Adams, 1992). The last question, whose aim was to determine how well students can adapt the angle concept to real life, consisted of an observer and a barrier, and students were asked to color the hidden area behind the barrier.
Only 2 of 104 students responded to this question correctly, whereas 19 of 104 students drew rays from the observer to both sides of the barrier and colored the area covered by the rays, the observer and the barrier. While 35 of 104 students just colored behind the barrier without using any strategy, 33 of 104 students constructed two perpendicular lines at both ends of the barrier and colored behind the barrier. Similarly, Munier, Devichi and Merle (2008) found that this incorrect strategy was used by 27% of students.

Consequently, we found that although the participants in this study were high achievers, they still held several misconceptions on the angle concept and had difficulties in adapting the angle concept to real life.

Keywords: the angle concept; misconceptions; erroneous answers; high achievers

References

Clausen-May, T. (2008). Another Angle on Angles. Australian Primary Mathematics Classroom, 13(1), 4–8.

Devichi, C., & Munier, V. (2013). About the concept of angle in elementary school: Misconceptions and teaching sequences. The Journal of Mathematical Behavior, 32(1), 1–19. http://doi.org/10.1016/j.jmathb.2012.10.001

Fyhn, A. B. (2006). A climbing girl’s reflections about angles. The Journal of Mathematical Behavior, 25(2), 91–102. http://doi.org/10.1016/j.jmathb.2006.02.004

Henderson, D. W., & Taimina, D. (2005). Experiencing geometry: Euclidean and non-Euclidean with history (3rd ed.). New York, USA: Prentice Hall.

Kim, O.-K., & Lee, J. H. (2014). Representations of Angle and Lesson Organization in Korean and American Elementary Mathematics Curriculum Programs. KAERA Research Forum, 1(3), 28–37.

Mitchelmore, M. C., & White, P. (1998). Development of angle concepts: A framework for research. Mathematics Education Research Journal, 10(3), 4–27.

Mitchelmore, M. C. (1998). Young students’ concepts of turning and angle. Cognition and Instruction, 16(3), 265–284.

Munier, V., Devichi, C., & Merle, H. (2008). A Physical Situation as a Way to Teach Angle. Teaching Children Mathematics, 14(7), 402–407.

Wilson, P. S., & Adams, V. M. (1992). A Dynamic Way to Teach Angle and Angle Measure. Arithmetic Teacher, 39(5), 6–13.

Relevance:

20.00%

Publisher:

Abstract:

High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives to the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and result in a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) Submission module, which receives raw data derived from Sanger sequencing (e.g. two-hybrid system); (ii) Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and add them to the project; (iii) Annotation module, which assigns annotations from several databases for the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather novel identified interactions, protein and metabolite expression/concentration levels, subcellular localization and computed topological metrics, GO biological processes and KEGG pathways enrichment. This module generates a XGMML file that can be imported into Cytoscape or be visualized directly on the web. We developed IIS by integrating diverse databases, in response to the need for appropriate tools for a systematic analysis of physical, genetic and chemical-genetic interactions.
IIS was validated with yeast two-hybrid, proteomics and metabolomics datasets, but it is also extendable to other datasets. IIS is freely available online at: http://www.lge.ibi.unicamp.br/lnbio/IIS/.

Relevance:

20.00%

Publisher:

Abstract:

The article investigates patterns of performance in, and relationships among, grip strength, gait speed and self-rated health, considering the variables of gender, age and family income. The study was conducted in a probabilistic sample of community-dwelling elderly aged 65 and over, members of a population study on frailty. A total of 689 elderly people without cognitive deficit suggestive of dementia underwent tests of gait speed and grip strength. Comparisons between groups were based on low, medium and high speed and strength. Self-rated health was assessed using a 5-point scale. The male and younger elderly individuals scored significantly higher on grip strength and gait speed than the female and oldest individuals did; the richest scored higher than the poorest on grip strength and gait speed; women and men aged over 80 had weaker grip strength and lower gait speed; slow gait speed and low income emerged as risk factors for a worse health evaluation. Lower muscular strength affects the self-rated assessment of health because it results in a reduction in functional capacity, especially in the presence of poverty and a lack of compensatory factors.

Relevance:

20.00%

Publisher:

Abstract:

Obstructive sleep apnea syndrome has a high prevalence among adults, and cephalometric variables can be a valuable tool for evaluating patients with this syndrome. The objective was to correlate cephalometric data with the apnea-hypopnea index. We performed a retrospective, cross-sectional study that analyzed the cephalometric data of patients followed in the Sleep Disorders Outpatient Clinic of the Discipline of Otorhinolaryngology of a university hospital, from June 2007 to May 2012. Ninety-six patients were included, 45 men and 51 women, with a mean age of 50.3 years. A total of 11 patients had snoring, 20 had mild apnea, 26 had moderate apnea, and 39 had severe apnea. The distance from the hyoid bone to the mandibular plane was the only variable that showed a statistically significant correlation with the apnea-hypopnea index. Cephalometric variables are thus useful tools for the understanding of obstructive sleep apnea syndrome.