165 resultados para multiple data


Relevância:

30.00%

Publicador:

Resumo:

Muscle physiologists often describe fatigue simply as a decline of muscle force and infer this causes an athlete to slow down. In contrast, exercise scientists describe fatigue during sport competition more holistically as an exercise-induced impairment of performance. The aim of this review is to reconcile the different views by evaluating the many performance symptoms/measures and mechanisms of fatigue. We describe how fatigue is assessed with muscle, exercise or competition performance measures. Muscle performance (single muscle test measures) declines due to peripheral fatigue (reduced muscle cell force) and/or central fatigue (reduced motor drive from the CNS). Peak muscle force seldom falls by >30% during sport, but the decline is often greater during electrical stimulation and laboratory exercise tasks. Exercise performance (whole-body exercise test measures) reveals impaired physical/technical abilities and subjective fatigue sensations. Exercise intensity is initially sustained by recruitment of new motor units and help from synergistic muscles before it declines. Technique/motor skill execution deviates as exercise proceeds to maintain outcomes before they deteriorate, e.g. reduced accuracy or velocity. The sensation of fatigue incorporates an elevated rating of perceived exertion (RPE) during submaximal tasks, due to a combination of peripheral and higher CNS inputs. Competition performance (sport symptoms) is affected more by decision-making and psychological aspects, since there are opponents and greater importance is placed on the result. Laboratory-based decision-making is generally faster or unimpaired. Motivation, self-efficacy and anxiety can change during exercise to modify RPE and, hence, alter physical performance. Symptoms of fatigue during racing, team-game or racquet sports are largely anecdotal, but sometimes assessed with time-motion analysis. Fatigue during brief all-out racing is described biomechanically as a decline of peak velocity, along with altered kinematic components. Longer sport events involve pacing strategies, central and peripheral fatigue contributions and elevated RPE. During match play, the work rate can decline late in a match (or tournament) and/or transiently after intense exercise bursts. Repeated sprint ability, agility and leg strength become slightly impaired. Technique outcomes, such as velocity and accuracy for throwing, passing, hitting and kicking, can deteriorate. Physical and subjective changes are both less severe in real than in simulated sport activities. Little objective evidence exists to support exercise-induced mental lapses during sport. A model depicting mind-body interactions during sport competition shows that the RPE centre-motor cortex-working muscle sequence drives overall performance levels and, hence, fatigue symptoms. The sporting outputs from this sequence can be modulated by interactions with muscle afferent and circulatory feedback, and with psychological and decision-making inputs. Importantly, compensatory processes exist at many levels to protect against performance decrements. Small changes of putative fatigue factors can also be protective. We show that individual fatigue factors, including diminished carbohydrate availability, elevated serotonin, hypoxia, acidosis, hyperkalaemia, hyperthermia, dehydration and reactive oxygen species, each contribute to several fatigue symptoms. Thus, multiple symptoms of fatigue can occur simultaneously and the underlying mechanisms overlap and interact.
Based on this understanding, we reinforce the proposal that fatigue is best described globally as an exercise-induced decline of performance, as this is inclusive of all viewpoints.

Relevância:

30.00%

Publicador:

Resumo:

Recent studies on automatic new topic identification in Web search engine user sessions demonstrated that neural networks are successful in automatic new topic identification. However, most of this work applied its new topic identification algorithms to data logs from a single search engine. In this study, we investigate whether the application of neural networks for automatic new topic identification is more successful on some search engines than others. Sample data logs from the Norwegian search engine FAST (currently owned by Overture) and from Excite are used in this study. Findings of this study suggest that query logs with more topic shifts tend to provide more successful results on shift-based performance measures, whereas logs with more topic continuations tend to provide better results on continuation-based performance measures.
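The shift-based and continuation-based measures referred to above can be read as precision and recall computed separately over topic-shift and topic-continuation instances. The following minimal Python sketch illustrates that reading; the labelling scheme (1 = shift, 0 = continuation) and the function name are our own assumptions, not the paper's code.

```python
# Illustrative sketch (not the paper's code): shift-based and continuation-based
# precision/recall from per-query-pair labels, where 1 marks a topic shift and
# 0 marks a topic continuation.

def shift_continuation_measures(actual, predicted):
    """Return (shift_precision, shift_recall, cont_precision, cont_recall)."""
    def prf(positive_label):
        tp = sum(1 for a, p in zip(actual, predicted)
                 if a == positive_label and p == positive_label)
        predicted_pos = sum(1 for p in predicted if p == positive_label)
        actual_pos = sum(1 for a in actual if a == positive_label)
        precision = tp / predicted_pos if predicted_pos else 0.0
        recall = tp / actual_pos if actual_pos else 0.0
        return precision, recall

    shift_p, shift_r = prf(1)   # shift-based measures
    cont_p, cont_r = prf(0)     # continuation-based measures
    return shift_p, shift_r, cont_p, cont_r

# Example: a short session with mostly topic continuations
actual    = [0, 0, 1, 0, 1, 0]
predicted = [0, 1, 1, 0, 0, 0]
print(shift_continuation_measures(actual, predicted))
```

Under this reading, a log dominated by shifts gives the shift-based measures more positive instances to reward, which is consistent with the finding quoted above.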

Relevância:

30.00%

Publicador:

Resumo:

Diversity techniques have long been used to combat channel fading in wireless communications systems. Recently, cooperative communications has attracted a lot of attention due to the many benefits it offers. Thus, cooperative routing protocols with diversity transmission can be developed to exploit the random nature of the wireless channels and improve network efficiency by selecting multiple cooperative nodes to forward data. In this paper we analyze and evaluate the performance of a novel routing protocol with multiple cooperative nodes which share multiple channels. The multiple shared channels cooperative (MSCC) routing protocol achieves a diversity advantage by using cooperative transmission. It unites a clustering hierarchy with a bandwidth reuse scheme to mitigate co-channel interference. Theoretical analyses of the average packet reception rate and network throughput of the MSCC protocol are presented and compared with simulation results.
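As a rough intuition for why forwarding through multiple cooperative nodes improves reception, consider the generic selection-diversity argument (this is not the MSCC analysis itself): if each relay-to-destination link succeeds independently with probability p, the packet is received whenever at least one of the n forwarders succeeds.

```python
# Hedged sketch of the generic selection-diversity idea (not the MSCC derivation):
# with n cooperative forwarders and independent per-link packet success
# probability p, reception succeeds if at least one forwarder's copy gets through.

def packet_reception_rate(p_link, n_relays):
    """P(at least one of n independent relay transmissions succeeds)."""
    return 1.0 - (1.0 - p_link) ** n_relays

for n in (1, 2, 3, 4):
    print(n, round(packet_reception_rate(0.7, n), 4))
```

Even two cooperating forwarders raise a 0.7 per-link success rate to 0.91, which is the kind of gain the diversity transmission above exploits.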

Relevância:

30.00%

Publicador:

Resumo:

Mixture models are a flexible tool for unsupervised clustering that have found popularity in a vast array of research areas. In studies of medicine, the use of mixtures holds the potential to greatly enhance our understanding of patient responses through the identification of clinically meaningful clusters that, given the complexity of many data sources, may otherwise be intangible. Furthermore, when developed in the Bayesian framework, mixture models provide a natural means for capturing and propagating uncertainty in different aspects of a clustering solution, arguably resulting in richer analyses of the population under study. This thesis aims to investigate the use of Bayesian mixture models in analysing varied and detailed sources of patient information collected in the study of complex disease. The first aim of this thesis is to showcase the flexibility of mixture models in modelling markedly different types of data. In particular, we examine three common variants on the mixture model, namely, finite mixtures, Dirichlet Process mixtures and hidden Markov models. Beyond the development and application of these models to different sources of data, this thesis also focuses on modelling different aspects relating to uncertainty in clustering. Examples of clustering uncertainty considered are uncertainty in a patient’s true cluster membership and accounting for uncertainty in the true number of clusters present. Finally, this thesis aims to address and propose solutions to the task of comparing clustering solutions, whether this be comparing patients or observations assigned to different subgroups or comparing clustering solutions over multiple datasets. To address these aims, we consider a case study in Parkinson’s disease (PD), a complex and commonly diagnosed neurodegenerative disorder. In particular, two commonly collected sources of patient information are considered. The first source of data is on symptoms associated with PD, recorded using the Unified Parkinson’s Disease Rating Scale (UPDRS), and constitutes the first half of this thesis. The second half of this thesis is dedicated to the analysis of microelectrode recordings collected during Deep Brain Stimulation (DBS), a popular palliative treatment for advanced PD. Analysis of this second source of data centers on the problems of unsupervised detection and sorting of action potentials or "spikes" in recordings of multiple cell activity, providing valuable information on real-time neural activity in the brain.
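As a concrete illustration of two of the mixture variants named above, the following sketch (ours, not the thesis code, and assuming scikit-learn is available) fits a finite Gaussian mixture and a truncated Dirichlet Process mixture to toy two-dimensional data, then reads off soft cluster-membership probabilities as one simple expression of clustering uncertainty.

```python
# Minimal sketch: finite Gaussian mixture vs. (truncated) Dirichlet Process
# mixture on toy data; assumes scikit-learn is installed.
import numpy as np
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])

finite = GaussianMixture(n_components=2, random_state=0).fit(X)
dp = BayesianGaussianMixture(
    n_components=10,                                   # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

print(finite.predict_proba(X)[:3])   # per-observation membership uncertainty
print(dp.weights_.round(3))          # unused truncated components get ~0 weight
```

The membership probabilities are one face of the uncertainty discussed above; the Dirichlet Process variant additionally leaves the effective number of clusters to be inferred rather than fixed in advance.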

Relevância:

30.00%

Publicador:

Resumo:

Twin studies offer the opportunity to determine the relative contribution of genes versus environment in traits of interest. Here, we investigate the extent to which variance in brain structure is reduced in monozygotic (MZ) twins with identical genetic make-up. We investigate whether using twins as compared to a control population reduces variability in a number of common magnetic resonance (MR) structural measures, and we investigate the location of areas under major genetic influences. This is fundamental to understanding the benefit of using twins in studies where structure is the phenotype of interest. Twenty-three pairs of healthy MZ twins were compared to matched control pairs. Volume, T2 and diffusion MR imaging were performed, as well as spectroscopy (MRS). Images were compared using (i) global measures of standard deviation and effect size, (ii) voxel-based analysis of similarity and (iii) intra-pair correlation. Global measures indicated a consistent increase in structural similarity in twins. The voxel-based and correlation analyses indicated a widespread pattern of increased similarity in twin pairs, particularly in frontal and temporal regions. The areas of increased similarity were most widespread for the diffusion trace and least widespread for T2. MRS showed a consistent reduction in metabolite variation that was significant for temporal lobe N-acetylaspartate (NAA). This study has shown the distribution and magnitude of reduced variability in brain volume, diffusion, T2 and metabolites in twins. The data suggest that evaluation of twins discordant for disease is indeed a valid way to attribute genetic or environmental influences to observed abnormalities in patients, since evidence is provided for the underlying assumption of decreased variability in twins.
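Of the three comparison approaches listed, intra-pair correlation is the simplest to make concrete. The sketch below (illustrative only, with toy numbers rather than the study's MR measures) computes the Pearson correlation between the two members of each pair, for 23 twin-like pairs sharing a strong common component versus 23 unrelated control pairs.

```python
# Illustrative sketch (not the study's pipeline): intra-pair correlation of a
# structural measure (e.g. a regional volume) in twin pairs vs. control pairs.
import numpy as np

def intra_pair_correlation(pairs):
    """Pearson correlation between member 1 and member 2 across pairs."""
    pairs = np.asarray(pairs)
    return np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1]

rng = np.random.default_rng(1)
shared = rng.normal(1000, 80, 23)                 # strong shared (genetic) component
twin_pairs = np.c_[shared + rng.normal(0, 20, 23),
                   shared + rng.normal(0, 20, 23)]
control_pairs = rng.normal(1000, 80, (23, 2))     # unrelated pair members

print("twins:   ", round(intra_pair_correlation(twin_pairs), 2))
print("controls:", round(intra_pair_correlation(control_pairs), 2))
```

Higher intra-pair correlation in the twin group than in the control group is the pattern the study reports for most measures.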

Relevância:

30.00%

Publicador:

Resumo:

Typical reference year (TRY) weather data is often used to represent the long-term weather pattern for building simulation and design. Through the analysis of ten years of historical hourly weather data for seven major Australian capital cities, using the frequencies procedure of descriptive statistics analysis (in SPSS), this paper investigates:
• the closeness of the typical reference year (TRY) weather data in representing the long-term weather pattern;
• the variations and common features that may exist between relatively hot and cold years.
It is found that, for the given set of input data, in comparison with the other weather elements, the discrepancy between the TRY and multiple years is much smaller for dry bulb temperature, relative humidity and global solar irradiance. The overall distribution patterns of key weather elements are also generally similar between the hot and cold years, but with some shift and/or small distortion. There is little common tendency of change between the hot and the cold years for different weather variables at different study locations.
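The frequencies comparison described above can be mirrored outside SPSS. The sketch below (our illustration with synthetic hourly values, not the paper's data) bins hourly dry bulb temperatures into common class intervals and sums the absolute differences in relative frequency between a TRY file and the pooled multi-year record.

```python
# Hedged sketch: frequency-distribution comparison of a TRY year against the
# pooled multi-year record, using synthetic hourly dry bulb temperatures.
import numpy as np

def frequency_distribution(values, bins):
    counts, _ = np.histogram(values, bins=bins)
    return counts / counts.sum()            # relative frequencies per class interval

rng = np.random.default_rng(2)
multi_year = rng.normal(18, 6, 10 * 8760)    # ten years of hourly data (toy values)
try_year = rng.normal(18, 6, 8760)           # typical reference year (toy values)

bins = np.arange(-5, 46, 2.5)                # 2.5 degC class intervals
discrepancy = np.abs(frequency_distribution(try_year, bins)
                     - frequency_distribution(multi_year, bins)).sum()
print("total absolute frequency discrepancy:", round(discrepancy, 3))
```

A small total discrepancy for an element (as reported above for dry bulb temperature, relative humidity and global solar irradiance) indicates that the TRY represents the long-term frequency distribution of that element closely.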

Relevância:

30.00%

Publicador:

Resumo:

The study shows an alternative solution to existing efforts at solving the problem of how to centrally manage and synchronise users’ Multiple Profiles (MP) across multiple discrete social networks. Most social network users hold more than one social network account and utilise them in different ways depending on the digital context (Iannella, 2009a). They may, for example, enjoy friendly chat on Facebook, professional discussion on LinkedIn, and health information exchange on PatientsLikeMe. Therefore many web users need to manage disparate profiles across many distributed online sources. Maintaining these profiles is cumbersome, time-consuming, inefficient, and may lead to lost opportunity. In this thesis the researcher proposes a framework for the management of a user’s multiple online social network profiles. A demonstrator, called Multiple Profile Manager (MPM), will be showcased to illustrate how effective the framework will be. The MPM will achieve the required profile management and synchronisation using a free, open, decentralized social networking platform (OSW) that was proposed by the Vodafone Group in 2010. The proposed MPM will enable a user to create and manage an integrated profile (IP) and share/synchronise this profile with all their social networks. The necessary protocols to support the prototype are also proposed by the researcher. The MPM protocol specification defines an Extensible Messaging and Presence Protocol (XMPP) extension for sharing vCard and social network account information between the MPM Server, MPM Client, and social network sites (SNSs). The writer of this thesis adopted a research approach and a number of use cases for the implementation of the project. The use cases were created to capture the functional requirements of the MPM and to describe the interactions between users and the MPM. In the research a development process was followed in establishing the prototype and related protocols. The use cases were subsequently used to illustrate the prototype via the screenshots taken of the MPM client interfaces. The use cases also played a role in evaluating the outcomes of the research, such as the framework, prototype, and the related protocols. An innovative application of this project is in the area of public health informatics. The researcher utilised the prototype to examine how the framework might benefit patients and physicians. The framework can greatly enhance health information management for patients and, more importantly, offer a more comprehensive personal health overview of patients to physicians. This will give a more complete picture of the patient’s background than is currently available and will prove helpful in providing the right treatment. The MPM prototype and related protocols have a high application value as they can be integrated into the real OSW platform and so serve users in the modern digital world. They also provide online users with a real platform for centrally storing their complete profile data, efficiently managing their personal information and, moreover, synchronising the overall complete profile with each of their discrete profiles stored in their different social network sites.
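To make the protocol idea concrete, the sketch below constructs the kind of XMPP <iq> stanza an MPM client might send. The vCard fields use the standard XEP-0054 "vcard-temp" namespace, while the <accounts> element and its "mpm:profile" namespace are purely hypothetical placeholders standing in for the thesis's extension, not its actual specification.

```python
# Purely illustrative: an XMPP IQ stanza carrying vCard fields plus a
# hypothetical MPM extension listing linked social network accounts.
# The "mpm:profile" namespace is an assumption, not part of the thesis.
import xml.etree.ElementTree as ET

iq = ET.Element("iq", {"type": "set", "id": "mpm-1", "to": "user@example.org"})
vcard = ET.SubElement(iq, "vCard", {"xmlns": "vcard-temp"})   # XEP-0054 vCard
ET.SubElement(vcard, "FN").text = "Jane Doe"
ET.SubElement(vcard, "NICKNAME").text = "jdoe"

accounts = ET.SubElement(iq, "accounts", {"xmlns": "mpm:profile"})  # hypothetical
for site in ("facebook", "linkedin", "patientslikeme"):
    ET.SubElement(accounts, "account", {"site": site}).text = "jdoe"

print(ET.tostring(iq, encoding="unicode"))
```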

Relevância:

30.00%

Publicador:

Resumo:

Ubiquitylation is a necessary step in the endocytosis and lysosomal trafficking of many plasma membrane proteins and can also influence protein trafficking in the biosynthetic pathway. Although a molecular understanding of ubiquitylation in these processes is beginning to emerge, very little is known about the role deubiquitylation may play. Fat Facets in mouse (FAM) is a substrate-specific deubiquitylating enzyme highly expressed in epithelia, where it interacts with its substrate, β-catenin. Here we show that, in the polarized intestinal epithelial cell line T84, FAM localized to multiple points of protein trafficking. FAM interacted with β-catenin and E-cadherin in T84 cells, but only in subconfluent cultures. FAM extensively colocalized with β-catenin in cytoplasmic puncta, but not at sites of cell-cell contact, and immunoprecipitated with β-catenin and E-cadherin from a higher molecular weight complex (~500 kDa). At confluence, FAM neither colocalized with nor immunoprecipitated β-catenin or E-cadherin, which were predominantly in a larger molecular weight complex (~2 MDa) at the cell surface. Overexpression of FAM in MCF-7 epithelial cells resulted in increased β-catenin levels, which localized to the plasma membrane. Expression of E-cadherin in L-cell fibroblasts resulted in the relocalization of FAM from the Golgi to cytoplasmic puncta. These data strongly suggest that FAM associates with E-cadherin and β-catenin during trafficking to the plasma membrane.

Relevância:

30.00%

Publicador:

Resumo:

Sequence data often have competing signals that are detected by network programs or Lento plots. Such data can be formed by generating sequences on more than one tree and combining the results (a mixture model). We report that, with such mixture models, the estimates of edge (branch) lengths from maximum likelihood (ML) methods that assume a single tree are biased. Based on the observed number of competing signals in real data, such a bias of ML is expected to occur frequently. Because network methods can recover competing signals more accurately, there is a need for ML methods allowing a network. A fundamental problem is that mixture models can have more parameters than can be recovered from the data, so that some mixtures are not, in principle, identifiable. We recommend that network programs be incorporated into best-practice analysis, along with ML and Bayesian trees.
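For readers unfamiliar with the setup, a standard two-tree mixture can be written as follows (the notation is ours, not the paper's): each site pattern D_i arises from one of two trees with mixing weight w, whereas single-tree ML maximises a likelihood built on one tree only, which is where the edge-length bias enters.

```latex
% Standard two-tree mixture (notation ours, not the paper's).
% Each site pattern D_i is generated under tree T_1 with probability w,
% and under tree T_2 otherwise:
\[
  P(D_i \mid \theta) \;=\; w \, P(D_i \mid T_1, \theta_1)
    \;+\; (1 - w) \, P(D_i \mid T_2, \theta_2), \qquad 0 \le w \le 1 .
\]
% Single-tree ML instead maximises a misspecified likelihood built on one tree,
% so its edge-length estimates are biased when the data follow the mixture:
\[
  \hat{\theta}_{\mathrm{ML}} \;=\; \arg\max_{\theta} \sum_{i} \log P(D_i \mid T_1, \theta).
\]
```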

Relevância:

30.00%

Publicador:

Resumo:

The discovery of protein variation is an important strategy in disease diagnosis within the biological sciences. The current benchmark for elucidating information from multiple biological variables is the so-called "omics" disciplines of the biological sciences. Such variability is uncovered by the implementation of multivariable data mining techniques, which fall into two primary categories: machine learning strategies and statistically based approaches. Typically, proteomic studies can produce hundreds or thousands of variables, p, per observation, n, depending on the analytical platform or method employed to generate the data. Many classification methods cannot cope with the resulting n ≪ p setting and, as such, require pre-treatment to reduce the dimensionality prior to classification. Recently, machine learning techniques have gained popularity in the field for their ability to successfully classify unknown samples. One limitation of such methods is the lack of a functional model allowing meaningful interpretation of results in terms of the features used for classification. This is a problem that might be solved using a statistical model-based approach in which not only is the importance of each individual protein explicit, but the proteins are also combined into a readily interpretable classification rule without relying on a black-box approach. Here we incorporate the statistical dimension reduction techniques Partial Least Squares (PLS) and Principal Components Analysis (PCA), followed by both statistical and machine learning classification methods, and compare them to a popular machine learning technique, Support Vector Machines (SVM). Both PLS and SVM demonstrate strong utility for proteomic classification problems.
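A minimal version of that workflow, under the assumption that scikit-learn is available (the paper does not specify its software), looks like the following: a p ≫ n matrix is reduced with PCA or PLS, a logistic regression is fitted on the resulting scores, and an RBF SVM is trained on the raw data for comparison. The synthetic data and all parameter choices are illustrative only.

```python
# Hedged sketch (not the paper's exact workflow): dimension reduction followed
# by classification on a p >> n "proteomic" matrix, compared against an SVM.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 500))                 # n = 80 samples, p = 500 features
y = (X[:, :5].sum(axis=1) > 0).astype(int)     # class signal carried by 5 "proteins"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)

# PCA scores, then logistic regression
pca = PCA(n_components=10).fit(X_tr)
pca_lr = LogisticRegression(max_iter=1000).fit(pca.transform(X_tr), y_tr)
print("PCA + logistic:", pca_lr.score(pca.transform(X_te), y_te))

# PLS scores (supervised reduction), then logistic regression
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
pls_lr = LogisticRegression(max_iter=1000).fit(pls.transform(X_tr), y_tr)
print("PLS + logistic:", pls_lr.score(pls.transform(X_te), y_te))

# SVM trained directly on the raw high-dimensional data
svm = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("SVM (RBF):     ", svm.score(X_te, y_te))
```

The PLS scores keep the link back to individual proteins through their loadings, which is the interpretability advantage the abstract contrasts with black-box classifiers.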

Relevância:

30.00%

Publicador:

Resumo:

Audio-visual speech recognition, or the combination of visual lip-reading with traditional acoustic speech recognition, has been previously shown to provide a considerable improvement over acoustic-only approaches in noisy environments, such as that present in an automotive cabin. The research presented in this paper extends upon the established audio-visual speech recognition literature to show that further improvements in speech recognition accuracy can be obtained when multiple frontal or near-frontal views of a speaker's face are available. A series of visual speech recognition experiments using a four-stream visual synchronous hidden Markov model (SHMM) are conducted on the four-camera AVICAR automotive audio-visual speech database. We study the relative contribution of the side and centrally orientated cameras in improving visual speech recognition accuracy. Finally, combination of the four visual streams with a single audio stream in a five-stream SHMM demonstrates a relative improvement of over 56% in word recognition accuracy when compared to the acoustic-only approach in the noisiest conditions of the AVICAR database.
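In a synchronous multi-stream HMM, the per-state observation score is typically a weighted combination of per-stream log-likelihoods. The sketch below shows that combination for one audio and four visual streams; the weights and scores are made-up illustrative values, not those used in the paper.

```python
# Hedged sketch of the stream-weighted observation score used in multi-stream
# (synchronous) HMMs: per-stream log-likelihoods combined with exponent weights.
import numpy as np

def combined_log_likelihood(stream_loglikes, stream_weights):
    """log b(o_t) = sum_s lambda_s * log b_s(o_t^s), with sum(lambda_s) = 1."""
    stream_loglikes = np.asarray(stream_loglikes, dtype=float)
    stream_weights = np.asarray(stream_weights, dtype=float)
    return float(np.dot(stream_weights, stream_loglikes))

# one acoustic stream + four camera streams (central and side views)
loglikes = [-12.3, -20.1, -19.4, -22.8, -21.0]
weights = [0.6, 0.1, 0.1, 0.1, 0.1]      # favour audio in clean conditions
print(combined_log_likelihood(loglikes, weights))
```

Shifting weight from the audio stream towards the visual streams as the acoustic noise level rises is the usual way such a five-stream model is tuned for noisy conditions.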

Relevância:

30.00%

Publicador:

Resumo:

Software as a Service (SaaS) in the Cloud has recently become more and more significant among software users and providers. A SaaS that is delivered as a composite application has many benefits, including reduced delivery costs, flexible offers of the SaaS functions and decreased subscription costs for users. However, this approach has introduced a new problem in managing the resources allocated to the composite SaaS. The resource allocation made at the initial stage may become overloaded or wasted due to the dynamic environment of a Cloud. A typical data center resource management system usually triggers a placement reconfiguration for the SaaS in order to maintain its performance as well as to minimize the resources used. Existing approaches to this problem often ignore the underlying dependencies between SaaS components. In addition, the reconfiguration also has to comply with SaaS constraints in terms of resource requirements, placement requirements and the SLA. To tackle the problem, this paper proposes a penalty-based Grouping Genetic Algorithm for clustering multiple composite SaaS components in the Cloud. The main objective is to minimize the resources used by the SaaS by clustering its components without violating any constraints. Experimental results demonstrate the feasibility and scalability of the proposed algorithm.
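The penalty-based idea can be sketched as a fitness function: the raw objective counts the resources (here, VMs) used by a candidate grouping of components, and every violated capacity or placement constraint adds a penalty so that infeasible groupings are driven out of the population. The structure below is an assumed illustration, not the paper's actual encoding.

```python
# Illustrative penalty-based fitness for a grouping GA (assumed form, not the
# paper's formulation): lower values are better.

def fitness(groups, component_demand, vm_capacity, must_colocate, penalty=100.0):
    """groups: list of lists of component ids, one inner list per VM."""
    cost = float(len(groups))                            # resources (VMs) used
    where = {c: i for i, vms in enumerate(groups) for c in vms}
    for vms in groups:                                   # VM capacity constraints
        if sum(component_demand[c] for c in vms) > vm_capacity:
            cost += penalty
    for a, b in must_colocate:                           # placement/dependency constraints
        if where[a] != where[b]:
            cost += penalty
    return cost

demand = {"ui": 2, "logic": 3, "db": 4, "cache": 1}
print(fitness([["ui", "logic"], ["db", "cache"]], demand, 6, [("db", "cache")]))
```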

Relevância:

30.00%

Publicador:

Resumo:

A significant issue encountered when fusing data received from multiple sensors is the accuracy of the timestamp associated with each piece of data. This is particularly important in applications such as Simultaneous Localisation and Mapping (SLAM), where vehicle velocity forms an important part of the mapping algorithms; on fast-moving vehicles, even millisecond inconsistencies in data timestamping can produce errors which need to be compensated for. The timestamping problem is compounded in a robot swarm environment due to the use of non-deterministic, readily available hardware (such as 802.11-based wireless) and inaccurate clock synchronisation protocols (such as the Network Time Protocol (NTP)). As a result, the synchronisation of the clocks between robots can be out by tens to hundreds of milliseconds, making correlation of data difficult and preventing the possibility of the units performing synchronised actions such as triggering cameras or intricate swarm manoeuvres. In this thesis, a complete data fusion unit is designed, implemented and tested. The unit, named BabelFuse, is able to accept sensor data from a number of low-speed communication buses (such as RS232, RS485 and CAN Bus) and also timestamp events that occur on General Purpose Input/Output (GPIO) pins, referencing a submillisecond-accurate, wirelessly distributed "global" clock signal. In addition to its timestamping capabilities, it can also be used to trigger an attached camera at a predefined start time and frame rate. This functionality enables the creation of a wirelessly synchronised, distributed image acquisition system over a large geographic area; a real-world application of this functionality is the creation of a platform to facilitate wirelessly distributed 3D stereoscopic vision. A ‘best-practice’ design methodology is adopted within the project to ensure the final system operates according to its requirements. Initially, requirements are generated, from which a high-level architecture is distilled. This architecture is then converted into a hardware specification and low-level design, which is then manufactured. The manufactured hardware is then verified to ensure it operates as designed, and firmware and Linux Operating System (OS) drivers are written to provide the features and connectivity required of the system. Finally, integration testing is performed to ensure the unit functions as per its requirements. The BabelFuse System comprises a single Grand Master unit which is responsible for maintaining the absolute value of the "global" clock. Slave nodes then determine their local clock offset from that of the Grand Master via synchronisation events which occur multiple times per second. The mechanism used for synchronising the clocks between the boards wirelessly makes use of specific hardware and a firmware protocol based on elements of the IEEE-1588 Precision Time Protocol (PTP). With the key requirement of the system being submillisecond-accurate clock synchronisation (as a basis for timestamping and camera triggering), automated testing is carried out to monitor the offsets between each Slave and the Grand Master over time. A common strobe pulse is also sent to each unit for timestamping; the correlation between the timestamps of the different units is used to validate the clock offset results.
Analysis of the automated test results shows that the BabelFuse units are almost three orders of magnitude more accurate than their requirement; the clocks of the Slave and Grand Master units do not differ by more than three microseconds over a running time of six hours, and the mean clock offset of the Slaves from the Grand Master is less than one microsecond. The common strobe pulse used to verify the clock offset data yields a positive result, with a maximum variation between units of less than two microseconds and a mean value of less than one microsecond. The camera triggering functionality is verified by connecting the trigger pulse output of each board to a four-channel digital oscilloscope and setting each unit to output a 100 Hz periodic pulse with a common start time. The resulting waveform shows a maximum variation between the rising edges of the pulses of approximately 39 µs, well below the target of 1 ms.
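For reference, the core of an IEEE-1588-style exchange (from which the BabelFuse protocol borrows elements) estimates the slave's clock offset and the path delay from four timestamps, assuming a symmetric path. The sketch below is the generic textbook formulation, not the BabelFuse firmware.

```python
# Hedged sketch of the IEEE-1588 two-way time transfer calculation.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: master sends Sync, t2: slave receives it,
       t3: slave sends Delay_Req, t4: master receives it."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # estimated one-way path delay
    return offset, delay

# timestamps in microseconds; slave clock runs 3 us ahead, path delay 50 us
print(ptp_offset_and_delay(1000.0, 1053.0, 1100.0, 1147.0))  # -> (3.0, 50.0)
```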

Relevância:

30.00%

Publicador:

Resumo:

Reliable ambiguity resolution (AR) is essential to Real-Time Kinematic (RTK) positioning and its applications, since incorrect ambiguity fixing can lead to largely biased positioning solutions. A partial ambiguity fixing technique is developed to improve the reliability of AR, involving partial ambiguity decorrelation (PAD) and partial ambiguity resolution (PAR). The decorrelation transformation can substantially amplify the biases in the phase measurements; the purpose of PAD is to find the optimum trade-off between decorrelation and worst-case bias amplification. The concept of PAR refers to the case where only a subset of the ambiguities can be fixed correctly to their integers in the integer least-squares (ILS) estimation system at high success rates. As a result, RTK solutions can be derived from these integer-fixed phase measurements. This is meaningful provided that the number of reliably resolved phase measurements is sufficiently large for least-squares estimation of the RTK solutions as well. Considering the GPS constellation alone, partially fixed measurements are often insufficient for positioning. The AR reliability is usually characterised by the AR success rate. In this contribution, an AR validation decision matrix is first introduced to understand the impact of the success rate. Moreover, the AR risk probability is included in a more complete evaluation of AR reliability. We use 16 ambiguity variance-covariance matrices with different levels of success rate to analyse the relation between the success rate and the AR risk probability. Next, the paper examines how, during the PAD process, a bias in one measurement is propagated and amplified onto many others, leading to more than one wrong integer and affecting the success probability. Furthermore, the paper proposes a partial ambiguity fixing procedure with a predefined success rate criterion and a ratio-test in the ambiguity validation process. In this paper, Galileo constellation data are tested with simulated observations. Numerical results from our experiment clearly demonstrate that only when the computed success rate is very high can the AR validation provide decisions about the correctness of AR that are close to the real world, with both low AR risk and low false alarm probabilities. The results also indicate that the PAR procedure can automatically choose an adequate number of ambiguities to fix at a given high success rate from the multiple constellations instead of fixing all the ambiguities. This is a benefit that multiple GNSS constellations can offer.
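One common way to realise a PAR criterion of the kind described above is through the bootstrapped success rate of the decorrelated ambiguities: the conditional standard deviations are sorted, and ambiguities are added to the fixed subset only while the cumulative success rate stays above a predefined threshold. The sketch below uses this standard bootstrapping formulation with made-up values; it is illustrative and not necessarily the exact procedure of the paper.

```python
# Hedged sketch: subset selection for partial ambiguity resolution using the
# standard bootstrapping success-rate formula (values are illustrative).
import math

def bootstrap_success_rate(cond_stds):
    """P_s = prod_i [ 2*Phi(1/(2*sigma_i)) - 1 ] over conditional std devs sigma_i."""
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    p = 1.0
    for s in cond_stds:
        p *= 2.0 * phi(1.0 / (2.0 * s)) - 1.0
    return p

def partial_subset(cond_stds, p0=0.999):
    """Largest leading subset (smallest sigmas first) meeting the success-rate criterion p0."""
    subset = []
    for s in sorted(cond_stds):
        if bootstrap_success_rate(subset + [s]) < p0:
            break
        subset.append(s)
    return subset

sigmas = [0.05, 0.07, 0.10, 0.15, 0.60, 0.80]   # conditional std devs in cycles (illustrative)
print(len(partial_subset(sigmas)), "of", len(sigmas), "ambiguities fixed")
```

With more constellations in view, more ambiguities tend to have small conditional standard deviations, so a larger subset passes the criterion, which matches the multi-GNSS benefit noted above.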