Abstract:
Background and purpose: There are no published studies on the parameterisation and reliability of the single-leg stance (SLS) test with inertial sensors in stroke patients. The purpose of this study was to analyse the intra-observer and inter-observer reliability and the sensitivity of inertial sensors used for the SLS test in stroke patients. A secondary objective was to compare the records of two inertial sensors (trunk and lumbar) to detect any significant differences in the kinematic data obtained during the SLS test. Methods: Design: cross-sectional study. While performing the SLS test, two inertial sensors were placed at the lumbar (L5-S1) and trunk (T7-T8) regions. Setting: Laboratory of Biomechanics (Health Science Faculty, University of Málaga). Participants: four chronic stroke survivors (over 65 years old). Measurements: displacement and velocity for rotation (X-axis), flexion/extension (Y-axis) and inclination (Z-axis), together with the resultant displacement and velocity (V), where the resultant velocity is RV = √(Vx² + Vy² + Vz²). Along with the SLS kinematic variables, descriptive analyses, differences between sensor locations, and intra-observer and inter-observer reliability were calculated. Results: Differences between the sensors were significant only for left inclination velocity (p = 0.036) and for extension displacement in the non-affected leg with eyes open (p = 0.038). Intra-observer reliability of the trunk sensor ranged from 0.889 to 0.921 for displacement and from 0.849 to 0.892 for velocity; intra-observer reliability of the lumbar sensor ranged from 0.896 to 0.949 for displacement and from 0.873 to 0.894 for velocity. Inter-observer reliability of the trunk sensor ranged from 0.878 to 0.917 for displacement and from 0.847 to 0.884 for velocity; inter-observer reliability of the lumbar sensor ranged from 0.870 to 0.940 for displacement and from 0.863 to 0.884 for velocity.
Conclusion: There were no significant differences between the kinematic records obtained during the SLS test by the two inertial sensors placed at the lumbar and thoracic regions. In addition, inertial sensors have the potential to be reliable, valid and sensitive instruments for kinematic measurements during SLS testing, but further research is needed.
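The resultant velocity defined in the abstract is a plain Euclidean norm of the three axis components. A minimal sketch of that computation follows; the function name and the sample readings are illustrative, not taken from the study.

```python
import math

def resultant_velocity(vx, vy, vz):
    """Resultant velocity from the three axis components, as defined
    in the abstract: RV = sqrt(Vx^2 + Vy^2 + Vz^2)."""
    return math.sqrt(vx**2 + vy**2 + vz**2)

# Hypothetical per-axis sensor readings for one SLS trial sample.
rv = resultant_velocity(3.0, 4.0, 12.0)
print(rv)  # 13.0
```

The same formula applies unchanged to the resultant displacement, with the displacement components substituted for the velocities.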
Abstract:
The Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium is a collaborative network of researchers working together on a range of large-scale studies that integrate data from 70 institutions worldwide. Organized into Working Groups that tackle questions in neuroscience, genetics, and medicine, ENIGMA studies have analyzed neuroimaging data from over 12,826 subjects. In addition, data from 12,171 individuals were provided by the CHARGE consortium for replication of findings, in a total of 24,997 subjects. By meta-analyzing results from many sites, ENIGMA has detected factors that affect the brain that no individual site could detect on its own, and that require larger numbers of subjects than any individual neuroimaging study has currently collected. ENIGMA's first project was a genome-wide association study identifying common variants in the genome associated with hippocampal volume or intracranial volume. Continuing work is exploring genetic associations with subcortical volumes (ENIGMA2) and white matter microstructure (ENIGMA-DTI). Working groups also focus on understanding how schizophrenia, bipolar illness, major depression and attention deficit/hyperactivity disorder (ADHD) affect the brain. We review the current progress of the ENIGMA Consortium, along with challenges and unexpected discoveries made on the way.
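The pooling step that site-level meta-analysis relies on can be sketched as a fixed-effect inverse-variance weighted average of per-site effect estimates. This is a generic textbook formulation, not the consortium's actual pipeline; the function name and the numbers are illustrative.

```python
def inverse_variance_pool(effects, std_errors):
    """Pool per-site effect estimates weighted by 1/SE^2 (fixed-effect
    inverse-variance meta-analysis); return (pooled effect, pooled SE)."""
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * b for w, b in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Hypothetical per-site SNP effects on a brain volume and their SEs.
betas = [0.10, 0.14, 0.08]
ses = [0.05, 0.04, 0.06]
b, se = inverse_variance_pool(betas, ses)
```

Because each site contributes with weight 1/SE², the pooled standard error is always smaller than the smallest per-site standard error, which is how combining many modest cohorts detects effects no single site could.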
Abstract:
Currently we are facing overwhelming growth in the number of reliable information sources on the Internet. The quantity of information available to everyone via the Internet is growing dramatically each year [15]. At the same time, the temporal and cognitive resources of human users are not changing, which gives rise to the phenomenon of information overload. The World Wide Web is one of the main sources of information for decision makers (reference to my research). However, our studies show that, at least in Poland, decision makers see some important problems when turning to the Internet as a source of decision information. One of the most commonly raised obstacles is the distribution of relevant information among many sources, and hence the need to visit different Web sources in order to collect and analyze all the important content. Several research groups have recently turned to the problem of information extraction from the Web [13]. Most of the effort so far has been directed toward collecting data from dispersed databases accessible via web pages (referred to as data extraction, or information extraction from the Web) and toward understanding natural-language texts by means of fact, entity, and association recognition (referred to as information extraction). Data extraction efforts show some interesting results; however, proper integration of web databases is still beyond reach. The information extraction field has recently been very successful in retrieving information from natural-language texts, but it still lacks the ability to understand more complex information, which requires the use of common-sense knowledge, discourse analysis, and disambiguation techniques.
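The simplest form of the entity recognition the abstract alludes to is pattern-based extraction of typed entities from free text. The sketch below illustrates that idea only; the patterns, type names, and sample sentence are assumptions for demonstration, not from the paper.

```python
import re

# Illustrative typed-entity patterns; real IE systems add statistical
# models, gazetteers, and disambiguation on top of rules like these.
PATTERNS = {
    "date": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
    "money": re.compile(r"\$\d+(?:\.\d{2})?"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def extract_entities(text):
    """Return {entity_type: [matches]} for every pattern that fires."""
    found = {}
    for name, pattern in PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            found[name] = matches
    return found

sample = "Invoice dated 2008-03-15 for $49.99, contact billing@example.com."
ents = extract_entities(sample)
```

Rule-based extraction of this kind handles narrow, regular entity types well; the harder cases the abstract mentions (common-sense knowledge, discourse, disambiguation) are precisely where such patterns break down.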