913 results for synchronous HMM


Relevance:

10.00%

Publisher:

Abstract:

It is well established that the development of insulin resistance shows a temporal sequence in different organs and tissues. Moreover, considering that the main aspect of insulin resistance in liver is a process of glucose overproduction from gluconeogenesis, we investigated if this metabolic change also shows temporal sequence. For this purpose, a well-established experimental model of insulin resistance induced by high-fat diet (HFD) was used. The mice received HFD (HFD group) or standard diet (COG group) for 1, 7, 14 or 56 days. The HFD group showed increased (P < 0.05 versus COG) epididymal, retroperitoneal and inguinal fat weight from days 1 to 56. In agreement with these results, the HFD group also showed higher body weight (P < 0.05 versus COG) from days 7 to 56. Moreover, the changes induced by HFD on liver gluconeogenesis were progressive because the increment (P < 0.05 versus COG) in glucose production from l-lactate, glycerol, l-alanine and l-glutamine occurred 7, 14, 56 and 56 days after the introduction of the HFD schedule, respectively. Furthermore, glycaemia and cholesterolemia increased (P < 0.05 versus COG) 14 days after starting the HFD schedule. Taken together, the results suggest that the intensification of liver gluconeogenesis induced by an HFD is not a synchronous all-or-nothing process but is specific for each gluconeogenic substrate and is integrated in a temporal manner with the progressive augmentation of fasting glycaemia. Copyright (c) 2012 John Wiley & Sons, Ltd.


It has been revealed that networks of excitable neurons with attractive coupling can generate spikes under stimuli of subthreshold signals with disordered phases. In this paper, we explore the firing activity induced by phase disorder in excitable neuronal networks containing both attractive and repulsive coupling. By increasing the fraction of repulsive coupling, we find that, in the weak-coupling case, the firing threshold of phase disorder is increased and the system's response to subthreshold signals is decreased, indicating that the effect of phase disorder in inducing neuron firing is weakened by repulsive coupling. Interestingly, in the strong-coupling case we see the opposite situation, where the coupled neurons show a rather large response to subthreshold signals even with small phase disorder. The latter case implies that the effect of phase disorder is enhanced by repulsive coupling. A system of two coupled excitable neurons is used to explain the role of repulsive coupling in phase-disorder-induced firing activity.
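The two-coupled-neuron system mentioned in the closing sentence can be sketched with a pair of FitzHugh-Nagumo units, a common stand-in for excitable neurons. The model form, the sinusoidal drive and all parameter values below are assumptions for illustration, not the paper's equations; the sign of g selects attractive (g > 0) or repulsive (g < 0) coupling:

```python
import math

def fhn_pair(g, phase, steps=4000, dt=0.05):
    """Two FitzHugh-Nagumo units driven by subthreshold sinusoids whose
    phases differ by `phase`; g > 0 couples attractively, g < 0
    repulsively. Returns the spike count of neuron 0 (upward threshold
    crossings). All constants are illustrative choices."""
    a, b, eps = 0.7, 0.8, 0.08           # classic FHN constants
    amp, omega = 0.3, 0.2                # subthreshold drive
    v, w = [-1.2, -1.2], [-0.6, -0.6]    # resting-like initial state
    spikes, above = 0, False
    for step in range(steps):
        t = step * dt
        I = (amp * math.sin(omega * t), amp * math.sin(omega * t + phase))
        dv = [v[k] - v[k] ** 3 / 3 - w[k] + I[k] + g * (v[1 - k] - v[k])
              for k in (0, 1)]
        dw = [eps * (v[k] + a - b * w[k]) for k in (0, 1)]
        for k in (0, 1):
            v[k] += dt * dv[k]
            w[k] += dt * dw[k]
        if v[0] > 1.0 and not above:     # count threshold crossings
            spikes, above = spikes + 1, True
        elif v[0] < 0.0:
            above = False
    return spikes

# spike counts under attractive vs repulsive coupling (no claim about
# which is larger -- that depends on the regime, as the paper discusses)
print(fhn_pair(0.1, 1.5), fhn_pair(-0.1, 1.5))
```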


We report the discovery by the CoRoT space mission of a new giant planet, CoRoT-20b. The planet has a mass of 4.24 +/- 0.23 M_Jup and a radius of 0.84 +/- 0.04 R_Jup. With a mean density of 8.87 +/- 1.10 g cm^-3, it is among the most compact planets known so far. Evolutionary models for the planet suggest a mass of heavy elements of the order of 800 M_Earth if embedded in a central core, requiring a revision of either the planet formation models or both planet evolution and structure models. We note however that smaller amounts of heavy elements are expected by more realistic models in which they are mixed throughout the envelope. The planet orbits a G-type star with an orbital period of 9.24 days and an eccentricity of 0.56. The star's projected rotational velocity is v sin i = 4.5 +/- 1.0 km s^-1, corresponding to a spin period of 11.5 +/- 3.1 days if its axis of rotation is perpendicular to the orbital plane. In the framework of Darwinian theories and neglecting stellar magnetic braking, we calculate the tidal evolution of the system and show that CoRoT-20b is presently one of the very few Darwin-stable planets that is evolving toward a triple synchronous state with equality of the orbital, planetary and stellar spin periods.
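The quoted spin period follows from v sin i given a stellar radius. The radius is not stated in the abstract, so the value below (about 1.02 solar radii) is purely a plug-in assumption to show the arithmetic:

```python
import math

# Back-of-the-envelope check of the quoted spin period. The stellar
# radius is NOT given in the abstract; ~1.02 solar radii is assumed
# here only to illustrate the calculation for a G-type host.
R_SUN = 6.957e8          # m
v_sini = 4.5e3           # m/s, projected rotational velocity
R_star = 1.02 * R_SUN    # assumed radius

# If the spin axis is perpendicular to the orbit, sin i ~ 1 and
# P_spin = 2 * pi * R_star / v.
P_spin_days = 2 * math.pi * R_star / v_sini / 86400.0
print(round(P_spin_days, 1))  # close to the quoted 11.5 +/- 3.1 d
```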


Abstract Background A large number of probabilistic models used in sequence analysis assign non-zero probability values to most input sequences. The most common way to decide whether a given probability is sufficient is Bayesian binary classification, where the probability of the model characterizing the sequence family of interest is compared to that of an alternative probability model; a null model can be used as the alternative. This is the scoring technique used by sequence analysis tools such as HMMER, SAM and INFERNAL. The most prevalent null models are position-independent residue distributions, including the uniform distribution, the genomic distribution, the family-specific distribution and the target sequence distribution. This paper presents a study evaluating the impact of the choice of null model on the final result of classifications. In particular, we are interested in minimizing the number of false predictions in a classification, a crucial issue for reducing the costs of biological validation. Results In all tests, the target null model presented the lowest number of false positives when using random sequences as a test. The study was performed on DNA sequences using GC content as the measure of compositional bias, but the results should also be valid for protein sequences. To broaden the applicability of the results, the study was performed using randomly generated sequences. Previous studies were performed on amino acid sequences, using only one probabilistic model (HMM) and a specific benchmark, and lack more general conclusions about the performance of null models. Finally, a benchmark test with P. falciparum confirmed these results. Conclusions Of the evaluated models, the best suited for classification are the uniform model and the target model. However, the uniform model presents a GC bias that can cause more false positives for candidate sequences with extreme compositional bias, a characteristic not described in previous studies. In these cases the target model is more dependable for biological validation due to its higher specificity.
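The comparison against a null model amounts to a log-odds score, and the GC-bias effect described above can be reproduced in a few lines. Here a made-up position-independent "family" distribution stands in for the real model (tools such as HMMER use profile HMMs), and the uniform null is compared with the target-composition null:

```python
import math

def log_odds(seq, model_probs, null_probs):
    """Log-odds score: log P(seq | model) - log P(seq | null), with
    position-independent residue distributions (a toy stand-in for a
    family model; real tools such as HMMER use profile HMMs)."""
    return sum(math.log(model_probs[r]) - math.log(null_probs[r]) for r in seq)

seq = "GCGCGCGCAT"                                  # GC-rich candidate
model = {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}    # AT-leaning toy "family"
uniform = {b: 0.25 for b in "ACGT"}
target = {b: seq.count(b) / len(seq) for b in "ACGT"}  # sequence's own composition

# The uniform null penalizes the compositional mismatch less than the
# target null, so the biased sequence scores higher against it -- the
# false-positive bias described in the abstract.
print(log_odds(seq, model, uniform) > log_odds(seq, model, target))  # → True
```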


This report describes a rare case of the coexistence of a benign phyllodes tumor measuring 9 cm in the right breast and an invasive ductal carcinoma of 6 cm in the left breast, synchronous and independent, in a 66-year-old patient. The patient underwent bilateral mastectomy due to the size of both lesions. Such situations are rare and usually involve ductal or lobular carcinoma in situ associated with malignant phyllodes tumors, more often in the ipsilateral breast or within the lesion.


This article proposes a new, practical and robust methodology for estimating synchronous generator parameters. The proposed methodology combines trajectory sensitivity analysis with a new approach, called the 'minimization approach', for parameter estimation of nonlinear dynamic systems modeled by sets of differential-algebraic equations. A suitable choice of inputs and outputs allows the generator parameter estimation to be split into independent estimations of the electrical and mechanical parameters. The methodology is robust with respect to the initial parameter guesses, does not require special tests, and uses only easily obtained disturbance measurements (three-phase currents and voltages, field voltage and rotor speed) collected from the electric power system without disconnecting the generator from the grid.
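The trajectory-sensitivity idea can be illustrated on a deliberately tiny system. The sketch below estimates the single parameter a of dx/dt = -a*x from simulated measurements using Gauss-Newton steps driven by the sensitivity s = dx/da; the generator model and the 'minimization approach' details are beyond this toy, and all names and values are illustrative:

```python
def simulate(a, x0=1.0, dt=0.01, n=500):
    """Euler-integrate dx/dt = -a*x together with its trajectory
    sensitivity s = dx/da, which obeys ds/dt = -x - a*s."""
    x, s, xs, ss = x0, 0.0, [], []
    for _ in range(n):
        xs.append(x)
        ss.append(s)
        dx, ds = -a * x, -x - a * s
        x, s = x + dt * dx, s + dt * ds
    return xs, ss

def estimate(meas, a0, iters=50, dt=0.01):
    """Gauss-Newton parameter update driven by the sensitivity:
    a <- a + sum(s * residual) / sum(s^2)."""
    a = a0
    for _ in range(iters):
        xs, ss = simulate(a, dt=dt, n=len(meas))
        num = sum(s * (m - x) for m, x, s in zip(meas, xs, ss))
        den = sum(s * s for s in ss)
        a += num / den
    return a

true_a = 2.0
meas, _ = simulate(true_a)        # noise-free "measurements"
a_hat = estimate(meas, a0=0.5)    # start far from the true value
print(round(a_hat, 3))            # converges to the true value 2.0
```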


Neoproterozoic geologic and geotectonic processes were of utmost importance in forming and structuring the basement framework of the South American platform. Two large domains with distinct evolutionary histories are identified with respect to the Neoproterozoic era: the northwest-west (the Amazonian craton and surroundings) and the central-southeast (the extra-Amazonian domain). In the first domain, Neoproterozoic events occurred only locally and were of secondary significance, and the geologic events, processes and structures of the pre-Neoproterozoic (and syn-Brasiliano) cratonic block were much more influential. In the second, the extra-Amazonian domain, the final evolution, structures and forms are assigned to events related to the development of a complex net of Neoproterozoic mobile belts. These in turn resulted in strong reworking of the older pre-Neoproterozoic basement. In this domain, four distinct structural provinces circumscribe or are separated by relatively small pre-Neoproterozoic cratonic nuclei, namely the Pampean, Tocantins, Borborema and Mantiqueira provinces. These extra-Amazonian provinces were formed by a complex framework of orogenic branching systems following a diversified post-Mesoproterozoic paleogeographic scenario. This scenario included many types of basement inliers as well as a diversified organization of accretionary and collisional orogens. The basement inliers date from the Archean to Mesoproterozoic periods and are different in nature. The escape tectonics that operated during the final consolidation stages of the provinces were important for, and responsible for, the final forms currently observed. These latest events, which occurred from the Late Ediacaran to the Early Ordovician, present serious obstacles to paleogeographic reconstructions. Two groups of orogenic collage systems are identified.
The older system, from the Tonian (>850 Ma) period, is of restricted occurrence and is not fully understood due to strong reworking subsequent to Tonian times. The second group of orogenies is more extensive and more important. Its development began with diachronic taphrogenic processes in the Early Cryogenian period (ca. 850–750 Ma) and preceded a complex scenario of continental, transitional and oceanic basins. Subsequent orogenies (post 800 Ma) were also created by diachronic processes that ended in the Early Ordovician. More than one orogeny (plate interaction) can be identified either in space or in time in every province. The orogenic processes were not necessarily synchronous in different parts of the orogenic system, even within the same province. This particular group of orogenic collage events is known as the “Brasiliano”. All of the structural provinces of the extra-Amazonian domain exhibit final events that are marked by extrusion processes, are represented by long lineaments, and are fundamental to unraveling the structural history of the Phanerozoic sedimentary basins.


Doctoral program: Cybernetics and Telecommunication


The recent trend in Web services is fostering a computing scenario where loosely coupled parties interact in a distributed and dynamic environment. Such interactions are sequences of xml messages, and in order to assemble parties – either statically or dynamically – it is important to verify that the “contracts” of the parties are “compatible”. The Web Service Description Language (wsdl) is a standard used for describing one-way (asynchronous) and request/response (synchronous) interactions. The Web Service Conversation Language (wscl) extends wsdl contracts by allowing the description of arbitrary, possibly cyclic sequences of messages exchanged between communicating parties. Unfortunately, neither wsdl nor wscl can effectively define a notion of compatibility, for the very simple reason that they do not provide any formal characterization of their contract languages. We define two contract languages for Web services. The first one is a data contract language and allows us to describe a Web service in terms of the messages (xml documents) that can be sent or received. The second one is a behavioral contract language and allows us to give an abstract definition of the Web service conversation protocol. Both languages are equipped with a sort of “sub-typing” relation and are therefore suitable for querying Web services repositories. In particular, a query for a service compatible with a given contract may safely return services with a “greater” contract.
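The "greater contract" ordering can be illustrated by reducing a data contract to the sets of messages a service accepts and emits: a service is safely usable where a given contract is expected if it accepts at least the requests the client sends and emits nothing the client cannot handle. This captures only the ordering idea, far more coarsely than the languages defined in the work; the message names are made up:

```python
def subcontract(small, big):
    """Toy 'sub-typing' check between contracts, modelled simply as
    (accepts, emits) message-name sets: `big` can stand in wherever
    `small` is expected if it accepts no fewer request kinds and emits
    no message the client of `small` cannot handle."""
    return small["accepts"] <= big["accepts"] and big["emits"] <= small["emits"]

query = {"accepts": {"GetQuote"}, "emits": {"Quote", "Fault"}}
svc   = {"accepts": {"GetQuote", "GetStatus"}, "emits": {"Quote"}}
bad   = {"accepts": {"GetQuote"}, "emits": {"Quote", "Debug"}}

# svc has a "greater" contract than the query asks for; bad emits a
# message ("Debug") the querying client cannot handle.
print(subcontract(query, svc), subcontract(query, bad))  # → True False
```

A repository query for services compatible with `query` may thus safely return `svc` but must not return `bad`.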


The continuous increase of genome sequencing projects has produced a huge amount of data in the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, the sequencing process alone determines only raw nucleotide sequences. This is just the first step of the genome annotation process, which deals with assigning biological information to each sequence. The annotation process operates at every level of the biological information processing mechanism, from DNA to protein, and cannot be accomplished by in vitro analysis procedures alone, which are extremely expensive and time-consuming when applied at such a large scale. Thus, in silico methods need to be used to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow fast, reliable and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine learning-based method for predicting the subcellular localization of soluble eukaryotic proteins. The method is called BaCelLo and was developed in 2006. Its main peculiarity is independence from the biases present in the training dataset, which cause over-prediction of the most represented examples in all the other predictors developed so far. This important result was achieved by a modification I made to the standard Support Vector Machine (SVM) algorithm, creating the so-called Balanced SVM. BaCelLo is able to predict the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo was shown to outperform all the currently available state-of-the-art methods for this prediction task.
BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it into a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genome, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work a new machine learning-based method was implemented for the prediction of GPI-anchored proteins. The method is able to efficiently predict from the raw amino acid sequence both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model (HMM)). The method is called GPIPE and was shown to greatly improve the prediction performance for GPI-anchored proteins over all previously developed methods. GPIPE was able to predict up to 88% of the experimentally annotated GPI-anchored proteins while maintaining a false positive rate as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted as GPI-anchored. A statistical analysis was performed on the composition of the regions surrounding the ω-site, which allowed the definition of specific amino acid abundances in the different regions considered. Furthermore, the hypothesis proposed in the literature that compositional biases are present among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo http://gpcr.biocomp.unibo.it/bacello eSLDB http://gpcr.biocomp.unibo.it/esldb GPIPE http://gpcr.biocomp.unibo.it/gpipe
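The HMM half of a GPIPE-like predictor rests on standard Viterbi decoding of the most probable state path. The sketch below is a deliberately toy two-state model (a "mature" region followed by an "anchor" signal region, over a reduced residue alphabet); the states, transition and emission probabilities are invented for illustration and are unrelated to GPIPE's real parameters:

```python
import math

def viterbi(obs, states, start, trans, emit):
    """Textbook Viterbi decoding in log-space: returns the most
    probable state path for an observation sequence."""
    V = [{s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        row, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: V[-1][p] + math.log(trans[p][s]))
            row[s] = V[-1][prev] + math.log(trans[prev][s]) + math.log(emit[s][o])
            ptr[s] = prev
        V.append(row)
        back.append(ptr)
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):   # follow back-pointers to recover the path
        path.append(ptr[path[-1]])
    return path[::-1]

# Invented two-state toy: a 'mature' region followed by an 'anchor' signal,
# over a reduced alphabet (S = small residue, H = hydrophobic, X = other).
states = ["mature", "anchor"]
start = {"mature": 0.99, "anchor": 0.01}
trans = {"mature": {"mature": 0.95, "anchor": 0.05},
         "anchor": {"mature": 0.01, "anchor": 0.99}}
emit = {"mature": {"S": 0.2, "H": 0.2, "X": 0.6},
        "anchor": {"S": 0.3, "H": 0.6, "X": 0.1}}

path = viterbi(list("XXXXSSHHHHHH"), states, start, trans, emit)
print(path[0], path[-1])  # decoding starts in 'mature' and ends in 'anchor'
```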


Machines with moving parts give rise to vibrations and, consequently, noise. The setup and status of each machine yield a peculiar vibration signature. Therefore, a change in the vibration signature, due to a change in the machine's state, can be used to detect incipient defects before they become critical. This is the goal of condition monitoring, in which the information obtained from a machine's signature is used to detect faults at an early stage. There is a large number of signal processing techniques that can be used to extract interesting information from a measured vibration signal. This study seeks to detect rotating machine defects using a range of techniques including synchronous time averaging, Hilbert transform-based demodulation, continuous wavelet transform, Wigner-Ville distribution and the spectral correlation density function. The detection and diagnostic capability of these techniques are discussed and compared on the basis of experimental results concerning gear tooth faults, i.e. fatigue cracks at the tooth root and tooth spalls of different sizes, as well as assembly faults in a diesel engine. Moreover, the sensitivity to fault severity is assessed by applying these signal processing techniques to gear tooth faults of different sizes.
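Of the techniques listed, synchronous time averaging is the simplest to sketch: given the shaft period in samples, averaging the signal revolution by revolution reinforces components locked to the rotation (e.g. gear mesh) and attenuates everything else. A minimal illustration with made-up signals; the period, frequencies and amplitudes are assumptions:

```python
import math

def synchronous_average(signal, period):
    """Time-synchronous averaging: slice the signal into revolutions of
    `period` samples and average point by point; components locked to
    the rotation survive, everything else averages toward zero."""
    n_rev = len(signal) // period
    return [sum(signal[r * period + i] for r in range(n_rev)) / n_rev
            for i in range(period)]

period, revs = 64, 40
# gear-locked tone (5 cycles per revolution) plus an unlocked tone
mesh = [math.sin(2 * math.pi * 5 * i / period) for i in range(period)]
signal = [mesh[i] + math.sin(2 * math.pi * 0.37 * (r * period + i))
          for r in range(revs) for i in range(period)]

avg = synchronous_average(signal, period)
err = max(abs(a - m) for a, m in zip(avg, mesh))
print(err < 0.1)  # the component not locked to the rotation is attenuated
```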


The vast majority of known proteins have not yet been experimentally characterized and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful in protein fold recognition and de novo design. The prediction of these contacts requires the study of protein inter-residue distances, related to the specific type of amino acid pair, that are encoded in the so-called contact map. An interesting new way of analyzing these structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps.
Provided that residue contacts are known for a protein sequence, the major features of its 3D structure can be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers, which drive the dimerization of many transcription factors, and more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards understanding the complex processes involved in biological networks. The rapid growth in the number of available protein sequences and structures poses new fundamental problems that still deserve an interpretation. Nevertheless, these data are the basis for designing new strategies for tackling problems such as the prediction of protein structure and function. Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of annotated proteins in the Homo sapiens genome have been experimentally characterized.
A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists in assigning sequences to a specific group of functionally related sequences that have been grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of multi-domain protein annotation and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of structure templates over the length of protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in the present databases of molecular functions and structures.
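The contact-map and small-world quantities discussed above (characteristic path length, clustering coefficient) can be computed directly from coordinates. A self-contained sketch on an idealized helical Cα-like trace with an assumed 8 Å cutoff; the geometry is a toy, not real structural data:

```python
import math
from collections import deque

def contact_map(coords, cutoff=8.0):
    """Adjacency sets: residues i, j are in contact when their distance
    is below `cutoff` (angstroms)."""
    n = len(coords)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(coords[i], coords[j]) < cutoff:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def clustering(adj):
    """Mean local clustering coefficient of the contact network."""
    coefs = []
    for nb in adj:
        k = len(nb)
        if k < 2:
            coefs.append(0.0)
            continue
        links = sum(1 for u in nb for v in nb if u < v and v in adj[u])
        coefs.append(2 * links / (k * (k - 1)))
    return sum(coefs) / len(coefs)

def char_path_length(adj):
    """Average shortest-path length over all reachable pairs (BFS)."""
    total = pairs = 0
    for src in range(len(adj)):
        dist, q = {src: 0}, deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# idealized helix-like trace: ~100 degrees of turn, 1.5 A rise per residue
coords = [(2.3 * math.cos(math.radians(100 * i)),
           2.3 * math.sin(math.radians(100 * i)),
           1.5 * i) for i in range(60)]
adj = contact_map(coords)
print(round(clustering(adj), 2), round(char_path_length(adj), 2))
```

The high clustering and short path length relative to the chain length are the small-world signature mentioned above.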


The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. The classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects is the Network on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on the single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs, which is installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes, etc. SoC manufacturers such as ST Microelectronics, Samsung and Philips, as well as universities such as Bologna University, M.I.T. and Berkeley, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers in the switch of design methodology and speed up the development of new NoC-based systems on chip. In this Thesis we propose an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose:
• a detailed analysis, based on simulation, of the Spidergon NoC, an ST Microelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel computing world. Here we propose a detailed analysis of this NoC topology and its routing algorithms. Furthermore we propose aEqualized, a new routing algorithm designed to optimize the use of the resources of the network while also increasing its performance;
• a methodology flow, based on modified publicly available tools, that combined can be used to design, model and analyze any kind of System on Chip;
• a detailed analysis of an ST Microelectronics-proprietary transport-level protocol that the author of this Thesis helped to develop;
• a simulation-based comprehensive comparison of different network interface designs proposed by the author and the researchers at the AST lab, in order to integrate shared-memory and message-passing based components on a single System on Chip;
• a powerful and flexible solution to address the timing closure exception issue in the design of synchronous Networks on Chip. Our solution is based on relay-station repeaters and allows us to reduce the power and area demands of NoC interconnects while also reducing their buffer needs;
• a solution to simplify the design of NoCs while also increasing their performance and reducing their power and area consumption. We propose to replace complex and slow virtual channel-based routers with multiple, flexible, small Multi Plane ones. This solution allows us to reduce the area and power dissipation of any NoC while also increasing its performance, especially when resources are reduced.
This Thesis has been written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department at Columbia University in the City of New York.
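The Spidergon topology mentioned above (a ring where every node also has a link to the diametrically opposite node) admits a simple shortest-path routing rule. The sketch below is a hypothetical "across-first" next-hop function derived only from the topology description, not ST Microelectronics' actual algorithm:

```python
def spidergon_next_hop(src, dst, n):
    """One step of a hypothetical 'across-first' route on a Spidergon-like
    topology: n nodes on a ring, each also linked to the node n/2 away.
    Long ring distances take the cross link first, then ring steps."""
    d = (dst - src) % n
    if d == 0:
        return src
    if d > n // 2:
        d -= n                       # signed ring distance in (-n/2, n/2]
    if abs(d) > n // 4:
        return (src + n // 2) % n    # cross link shortens long journeys
    return (src + 1) % n if d > 0 else (src - 1) % n

def route(src, dst, n):
    """Full path obtained by iterating the next-hop rule."""
    path = [src]
    while path[-1] != dst:
        path.append(spidergon_next_hop(path[-1], dst, n))
    return path

print(route(0, 7, 16))  # → [0, 8, 7]: across to node 8, one ring step back
```

Under this rule no route exceeds n/4 + 1 hops, which is the usual selling point of the topology over a plain ring.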


The aim of this thesis is to go through different approaches for proving expressiveness properties in several concurrent languages. We analyse four different calculi, exploiting a different technique for each one. We begin with the analysis of a synchronous language: we explore the expressiveness of a fragment of CCS! (a variant of Milner's CCS where replication is considered instead of recursion) w.r.t. the existence of faithful encodings (i.e. encodings that respect the behaviour of the encoded model without introducing unnecessary computations) of models of computability strictly less expressive than Turing machines, namely grammars of types 1, 2 and 3 in the Chomsky hierarchy. We then move to asynchronous languages and study full abstraction for two Linda-like languages. Linda can be considered the asynchronous version of CCS plus a shared memory (a multiset of elements) used for storing messages. After having defined a denotational semantics based on traces, we obtain fully abstract semantics for both languages by using suitable abstractions in order to identify different traces which do not correspond to different behaviours. Since the ability of one of the two variants to recognise multiple occurrences of messages in the store (which accounts for an increase in expressiveness) is reflected in a less complex abstraction, we then study other languages where multiplicity plays a fundamental role. We consider the language CHR (Constraint Handling Rules), a language which uses multi-headed (guarded) rules. We prove that multiple heads augment the expressive power of the language. Indeed, we show that if we restrict to rules where the head contains at most n atoms we generate a hierarchy of languages with increasing expressiveness (i.e. the CHR language allowing at most n atoms in the heads is more expressive than the language allowing at most m atoms, with m < n).
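The multiplicity capability that separates the two Linda variants can be made concrete with a toy tuple space: a store that counts copies lets a process distinguish two occurrences of the same message from one, which is exactly the extra observational power discussed above. The API below is illustrative, not a real Linda implementation:

```python
from collections import Counter

class TupleSpace:
    """Minimal Linda-like shared store: `out` adds a message, `inp`
    consumes one copy, `count` exposes multiplicity -- the capability
    that separates the two variants discussed in the abstract."""
    def __init__(self):
        self.store = Counter()

    def out(self, msg):
        self.store[msg] += 1

    def inp(self, msg):
        if self.store[msg] == 0:
            return False          # would block in a real Linda
        self.store[msg] -= 1
        return True

    def count(self, msg):
        return self.store[msg]

ts = TupleSpace()
ts.out("a")
ts.out("a")
# a multiplicity-aware process can tell two copies of "a" from one,
# e.g. by consuming twice without an intervening `out`:
print(ts.inp("a"), ts.inp("a"), ts.inp("a"))  # → True True False
```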


Human reactions to vibration have been extensively investigated in the past. Vibration, as well as whole-body vibration (WBV), has commonly been considered an occupational hazard for its detrimental effects on human condition and comfort. Although long-term exposure to vibrations may produce undesirable side effects, a great part of the literature is dedicated to the positive effects of WBV when used as a method for muscular stimulation and as an exercise intervention. Whole-body vibration training (WBVT) aims to mechanically activate muscles by eliciting neuromuscular activity (muscle reflexes) via vibrations delivered to the whole body. The most frequently mentioned mechanism to explain the neuromuscular outcomes of vibration is the elicited neuromuscular activation. Local tendon vibrations induce activity of the muscle spindle Ia fibers, mediated by monosynaptic and polysynaptic pathways: a reflex muscle contraction known as the Tonic Vibration Reflex (TVR) arises in response to such a vibratory stimulus. In WBVT, mechanical vibrations, in a range from 10 to 80 Hz and with peak-to-peak displacements from 1 to 10 mm, are usually transmitted to the patient's body by oscillating platforms. Vibrations are then transferred from the platform to a specific muscle group through the subject's body. To customize WBV treatments, surface electromyography (SEMG) signals are often used to reveal the best stimulation frequency for each subject. The use of concise SEMG parameters, such as the root mean square value of the recordings, is also common practice; frequently a preliminary session takes place in order to discover the most appropriate stimulation frequency.
Soft tissues act as wobbling masses vibrating in a damped manner in response to mechanical excitation; the Muscle Tuning hypothesis suggests that the neuromuscular system works to damp the soft tissue oscillation that occurs in response to vibrations; muscles alter their activity to damp the vibrations, preventing any resonance phenomenon. Muscle response to vibration is, however, a complex phenomenon, as it depends on different parameters such as muscle tension, muscle or segment stiffness, and the amplitude and frequency of the mechanical vibration. Additionally, while in TVR studies the applied vibratory stimulus and the muscle conditions are completely characterised (a known vibration source is applied directly to a stretched/shortened muscle or tendon), in WBV studies only the stimulus applied to a distal part of the body is known. Moreover, the mechanical response changes in relation to posture. The transmissibility of the vibratory stimulus along the body segment strongly depends on the position held by the subject. The aim of this work was to investigate the effects that the use of vibrations, in particular whole-body vibrations, may have on muscular activity. A new approach to discovering the most appropriate stimulation frequency, by the use of accelerometers, was also explored. Several subjects, not affected by any known neurological or musculoskeletal disorders, were voluntarily involved in the study and gave their informed, written consent to participate. The device used to deliver vibration to the subjects was a vibrating platform. Vibrations impressed by the platform were exclusively vertical; platform displacement was sinusoidal with an intensity (peak-to-peak displacement) set to 1.2 mm and a frequency ranging from 10 to 80 Hz. All the subjects familiarized themselves with the device and the proper positioning. Two different postures were explored in this study: position 1, hack squat; position 2, subject standing on toes with heels raised.
SEMG signals from the rectus femoris (RF), vastus lateralis (VL) and vastus medialis (VM) were recorded. The SEMG signals were amplified using a multi-channel, isolated biomedical signal amplifier; the gain was set to 1000 V/V and a band-pass filter (-3 dB frequencies 10-500 Hz) was applied; no notch filters were used to suppress line interference. Tiny, lightweight (less than 10 g) three-axial MEMS accelerometers (Freescale Semiconductor) were used to measure accelerations on the subjects' skin, at the EMG electrode level. The acceleration signals provided information on the oscillation of the subjects' RF, biceps femoris (BF) and gastrocnemius lateralis (GL) muscle bellies; they were pre-processed to remove the influence of gravity. As demonstrated by our results, vibrations generate a peculiar, non-negligible motion artifact on skin electrodes. The artifact amplitude is generally unpredictable; it appeared in all the quadriceps muscles analysed, but in different amounts. The artifact harmonics extend throughout the EMG spectrum, making classic high-pass filters ineffective; however, their contribution was easy to filter out from the raw EMG signal with a series of sharp notch filters (1.5 Hz wide) centred at the vibration frequency and its higher harmonics. The use of these simple filters, however, prevents the detection of EMG power variations within the filtered bands. Moreover, our experience suggests that reducing the motion artifact by using particular electrodes and by carefully preparing the subject's skin is not easily viable; even though some small improvements were obtained, it was not possible to substantially decrease the artifact. In any case, removing those artifacts leads to some loss of true EMG signal. Nevertheless, our preliminary results suggest that the use of notch filters at the vibration frequency and its harmonics is suitable for motion-artifact filtering.
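The multi-band notch filtering described above (a 1.5 Hz-wide notch at the vibration frequency and each of its higher harmonics within the EMG band) can be sketched with SciPy's standard notch-filter design. This is an illustrative reconstruction under stated assumptions, not the authors' implementation; the function name and the use of zero-phase filtering are assumptions:

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

def remove_vibration_artifact(emg, fs, f_vib, bw=1.5, f_max=500.0):
    """Suppress vibration motion artifact with a comb of sharp notch filters.

    A notch of width bw (Hz) is placed at the vibration frequency f_vib
    and at each of its higher harmonics up to f_max, the upper edge of
    the EMG band (10-500 Hz in the recordings described above).
    """
    out = np.asarray(emg, dtype=float)
    f0 = f_vib
    while f0 < min(f_max, fs / 2):
        # Q = centre frequency / bandwidth gives a bw-Hz-wide notch
        b, a = iirnotch(f0, Q=f0 / bw, fs=fs)
        out = filtfilt(b, a, out)  # zero-phase, so no added phase distortion
        f0 += f_vib
    return out
```

As the text notes, this necessarily discards any true EMG power that falls inside the notched bands, so the filtered signal slightly underestimates the synchronous muscle activity.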
In RF SEMG recordings during vibratory stimulation, only a small EMG power increment should be contained in the filtered bands due to synchronous electromyographic activity of the muscle; it is therefore preferable to remove the artifact, which in our experience accounted for more than 40% of the total signal power. In summary, many variables have to be taken into account: in addition to the amplitude, frequency and duration of the vibration treatment, other fundamental variables were found to be subject anatomy, individual physiological condition and the subject's positioning on the platform. Studies on WBV treatments that include surface EMG analysis to assess muscular activity during vibratory stimulation should take the presence of motion artifacts into account; appropriate filtering of the artifacts, to reveal the actual muscle contraction elicited by the vibration stimulus, is mandatory. As a result of our preliminary study, simple multi-band notch filtering may help to reduce the randomness of the results. The muscle tuning hypothesis seemed to be confirmed: our results suggest that the effects of WBV are linked to the actual muscle motion (displacement). The greater the muscle belly displacement, the higher the measured muscle activity. The maximum muscle activity was found in correspondence with the local mechanical resonance, suggesting a more effective stimulation at the specific resonance frequency of the system. Under the hypothesis that muscle activation is proportional to muscle displacement, treatment optimization could be obtained simply by monitoring local acceleration (resonance). However, our study revealed only short-term effects of the vibratory stimulus; prolonged studies should be conducted to assess the long-term validity of these results.
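The proposed optimization, monitoring local acceleration to find the frequency of maximal muscle belly oscillation, reduces to picking the stimulation frequency whose accelerometer signal has the largest RMS. The following is a minimal sketch of that selection step, assuming gravity-compensated accelerometer recordings are available per trial; the function name and data layout are hypothetical:

```python
import numpy as np

def best_stimulation_frequency(trials):
    """Pick the platform frequency giving the largest local oscillation.

    trials: dict mapping stimulation frequency (Hz) -> 1-D array of
    gravity-compensated accelerometer samples recorded at that frequency.
    Returns the frequency whose acceleration RMS is maximal, taken here
    as an indicator of the local mechanical resonance.
    """
    rms = {f: np.sqrt(np.mean(np.asarray(a, dtype=float) ** 2))
           for f, a in trials.items()}
    return max(rms, key=rms.get)
```

Under the hypothesis stated above (muscle activation proportional to muscle displacement), the frequency returned by such a selection would also be the most effective stimulation frequency for that subject and posture.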
Since the local stimulus depends on the kinematic chain involved, WBV muscle stimulation has to take into account the transmissibility of the stimulus along the body segments in order to ensure that the vibratory stimulation effectively reaches the target muscle. The combination of local resonance and muscle response should also be further investigated to prevent hazards to individuals undergoing WBV treatments.