Abstract:
Two 7-day mesocosm experiments were conducted in October 2012 at the Instituto Nacional de Desenvolvimento das Pescas (INDP), Mindelo, Cape Verde. Surface water was collected at night before the start of each experiment with RV Islândia south of São Vicente (16°44.4'N, 25°09.4'W) and transported to shore in four 600 L food-safe intermediate bulk containers. Sixteen mesocosm bags were distributed among four flow-through water baths and shaded with blue, transparent lids to approximately 20% of surface irradiation. The mesocosm bags were filled from the containers by gravity, using a submerged hose to minimize bubbles. The exact volume inside each bag was calculated by adding 1.5 mmol of silicate and measuring the resulting silicate concentration; the volumes ranged from 105.5 to 145 L. The experimental manipulation comprised additions of different amounts of inorganic N and P. In the first experiment, the P supply was varied at constant N supply in thirteen of the sixteen units, while in the second experiment the N supply was varied at constant P supply in twelve of the sixteen units. In addition, "cornerpoints" were chosen that were repeated in both experiments. Four cornerpoints should have been repeated, but setting the nutrient levels in one mesocosm was not successful, so this mesocosm was instead set to the center-point conditions. Experimental treatments were evenly distributed among the four water baths. Initial sampling of the mesocosms on day 1 of each run was conducted between 9:45 and 11:30. After nutrient manipulation, sampling was conducted daily between 09:00 and 10:30 on days 2 to 8.
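The bag volume follows from simple spike dilution: the amount of silicate added divided by the measured rise in concentration gives the enclosed water volume. A minimal sketch of that arithmetic, assuming the background silicate concentration is measured before the spike (the concentration values below are hypothetical, not the study's data):

```python
def bag_volume_liters(added_silicate_mmol, conc_before_umol_l, conc_after_umol_l):
    """Estimate mesocosm volume from a silicate spike.

    V = n_added / delta_C, with the concentration rise converted
    from umol/L to mmol/L so the result comes out in liters.
    """
    delta_c_mmol_l = (conc_after_umol_l - conc_before_umol_l) / 1000.0
    return added_silicate_mmol / delta_c_mmol_l

# Hypothetical readings: a 1.5 mmol spike raising silicate by 12.5 umol/L
print(bag_volume_liters(1.5, 1.0, 13.5))  # -> 120.0 L, within the reported 105.5-145 L range
```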
Abstract:
A Network of Evolutionary Processors (NEP) is a computational model inspired by the evolutionary behaviour of cells, specifically by their multiplication rules. This inspiration makes the model a syntactic abstraction of the way cells manipulate information. In particular, a NEP defines a theoretical computing machine capable of solving NP-complete problems efficiently in terms of time. In practice, NEPs simulated on conventional computing machines are expected to solve complex real-world problems (those requiring high scalability) in exchange for a high spatial complexity. In the NEP model, cells are represented by words that encode their DNA sequences. Informally, at any moment of the system's computation, its evolutionary state is described as a collection of words, each of which represents a cell. These fixed moments of evolution are called configurations. As in the biological model, the words (cells) mutate and divide through simple bio-operations, but only the fit words (much as in the process of natural selection) are kept for the next configuration. As a computing tool, a NEP defines a parallel and distributed architecture for symbolic processing; in other words, a network of language processors. Since the model was proposed to the scientific community in 2001, multiple variants have been developed, and their properties regarding computational completeness, efficiency and universality have been widely studied and proved. Today, therefore, we can consider the theoretical NEP model to have reached maturity. The main motivation of this Final Degree Project is to propose a practical approach for making the leap from the theoretical NEP model to a real implementation that can run on high-performance computing platforms, in order to solve the complex problems that today's society demands. Until now, the tools developed to simulate the NEP model, while correct and yielding satisfactory results, have usually been tied to their execution environment, whether through specific hardware or through implementations particular to one problem. In this context, the fundamental purpose of this work is the development of Nepfix, a generic and extensible tool for executing any algorithm of the NEP model (or any of its variants), either locally, as a traditional application, or distributed using cloud services. Nepfix is a software application developed over 7 months, currently in its second iteration after leaving the prototype phase behind. Nepfix has been designed as a modular, self-contained application written in Java 8; that is, it does not require a specific execution environment (any Java virtual machine is a valid container). Nepfix consists of two components or modules. The first module corresponds to the execution of a NEP and is therefore the simulator. Its development took into account the current state of the model, that is, the definitions of the most common processors and filters that make up the NEP family of models.
Additionally, this component offers flexibility in execution: the simulator's capabilities can be extended without modifying Nepfix, using a scripting language. As part of the development of this component, a standard representation of the NEP model based on the JSON format has also been defined, together with a proposed way of representing and encoding words, which is necessary for communication between servers. A further important characteristic of this component is that it can be considered an isolated application, so the distribution and execution strategies are completely independent. The second module corresponds to the distribution of Nepfix in the cloud. This development is the result of an R&D process with a considerable scientific component. The development of this module is worth highlighting not only for the expected practical results, but also for the research process that must be undertaken with this new perspective for executing natural computing systems. The main characteristic of applications that run in the cloud is that they are managed by the platform and are normally encapsulated in a container. In the case of Nepfix, this container is a Spring application that uses the HTTP or AMQP protocol to communicate with the other instances. As added value, Nepfix addresses two different implementation perspectives of the distribution and execution model (developed in two different iterations), which have a very significant impact on the capabilities and restrictions of the simulator. Specifically, the first iteration uses an asynchronous execution model. In this asynchronous perspective, the components of the NEP network (processors and filters) are treated as elements that react to the need to process a word. This implementation is an optimization of a common topology in the NEP model that makes it possible to use cloud tools to achieve transparent scaling (with respect to load balancing between processors), but it produces undesired effects such as indeterminacy in the order of the results and the impossibility of efficiently distributing strongly interconnected networks. The second iteration, on the other hand, corresponds to the synchronous execution model. The elements of a NEP network follow a start-compute-synchronize cycle until the problem is solved. This synchronous perspective faithfully represents the theoretical NEP model, but the synchronization process is expensive and requires additional infrastructure; specifically, a RabbitMQ message queue server. Nevertheless, in this perspective the benefits outweigh the drawbacks for sufficiently large problems, since distribution is immediate (there are no restrictions), although the scaling process is not trivial. In short, the concept of Nepfix as a computational framework can be considered satisfactory: the technology is viable and the first results confirm that the characteristics originally sought have been achieved. Many fronts remain open for future research. This document proposes some approaches to solving the problems identified, such as error recovery and the dynamic division of a NEP into different subdomains.
On the other hand, other problems beyond the scope of this project remain open for future development, such as the standardization of word representation and optimizations in the execution of the synchronous model. Finally, some preliminary results of this Final Degree Project were recently presented as a scientific paper at the "International Work-Conference on Artificial Neural Networks (IWANN) 2015" and published in "Advances in Computational Intelligence", volume 9094 of Springer International Publishing's "Lecture Notes in Computer Science". This confirms that this work, more than a Final Degree Project, is only the beginning of work that may have a greater impact on the scientific community.
Abstract:
A Network of Evolutionary Processors (NEP) is a computational model inspired by the evolution of cell populations, which might model some properties of evolving cell communities at the syntactical level. A NEP defines theoretical computing devices able to solve NP-complete problems in an efficient manner. In this model, cells are represented by words which encode their DNA sequences. Informally, at any moment of time, the evolutionary system is described by a collection of words, where each word represents one cell. Cells belong to species, and their community evolves according to mutations and division, which are defined by operations on words. Only those cells represented by a word in a given set of words, called the genotype space of the species, are accepted as surviving (correct) ones. This feature is analogous to the natural process of evolution. Formally, a NEP is based on an architecture for parallel and distributed processing; in other words, a network of language processors. Since the date when the NEP was proposed, several extensions and variants have appeared, engendering a new set of models named Networks of Bio-inspired Processors (NBP). During this time, several works have proved the computational power of NBP; specifically, their efficiency, universality, and computational completeness have been thoroughly investigated. Therefore, we can say that the NEP model has reached its maturity. The main motivation for this End of Grade project (EOG project for short) is to propose a practical approach that closes the gap between the theoretical NEP model and a practical implementation on high-performance computational platforms, in order to solve some of the high-complexity problems society faces today. Up until now, tools developed to simulate NEPs, while correct and successful, have usually been tightly coupled to the execution environment, using specific software frameworks (Hadoop) or direct hardware usage (GPUs). Within this context, the main purpose of this work is the development of Nepfix, a generic and extensible tool that aims to execute algorithms based on the NEP model and compatible variants, either locally, like a traditional application, or in a distributed cloud environment. Nepfix was developed during a 7-month cycle and is undergoing its second iteration now that the prototype period is over. Nepfix is designed as a modular, self-contained application written in Java 8; that is, no additional external dependencies are required, and it does not rely on a specific execution environment: any JVM is a valid container. Nepfix is made of two components or modules. The first module corresponds to NEP execution and therefore simulation.
During development, the current state of the theoretical model was used as a reference, including the most common filters and processors. Additionally, extensibility is provided through Python as a scripting language for running custom logic. Along with the simulator, a JSON-based definition language for NEPs has been specified, as well as mechanisms to represent words and their possible manipulations. The NEP simulator is isolated from distribution and, as mentioned before, different applications can include it as a dependency; the distribution of NEPs is an example of this. The second module corresponds to executing Nepfix in the cloud. Its development involved a substantial R&D process, since this front had not been explored by other research groups until now. It is important to point out that the development of this module is not focused on results at this point in time; instead, we focus on feasibility and discovery of this new perspective for executing natural computing systems, and NEPs specifically. The main property of cloud applications is that they are managed by the platform and encapsulated in a container. For Nepfix, a Spring application becomes the container, and the HTTP or AMQP protocols are used for communication with the rest of the instances. Different execution perspectives were studied; namely, asynchronous and synchronous models were developed for solving different kinds of problems using NEPs. Different limitations and restrictions manifest in both models and are explored in detail in the respective chapters. In conclusion, we can consider Nepfix as a computational framework successful: cloud technology is ready for the challenge, and the first results reassure us that the properties the Nepfix project pursued were met. Many research branches are left open for future investigation. In this EOG, implementation guidelines are proposed for some of them, such as error recovery and dynamic NEP splitting. On the other hand, other interesting problems that were not in the scope of this project were identified during development, such as word representation standardization and NEP model optimizations. As a confirmation that the results of this work can be useful to the scientific community, a preliminary version of this project was published in the International Work-Conference on Artificial Neural Networks (IWANN) in May 2015. Development has not stopped since then, and while Nepfix in its current state cannot be considered a final product, the most relevant ideas, possible problems and solutions produced during the seven-month development cycle are worth gathering and presenting, giving meaning to this EOG work.
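Both versions of the abstract describe a NEP as words evolving inside processors and moving across filtered connections, with the synchronous model alternating compute and synchronization steps. The sketch below is a minimal illustration of one such start-compute-synchronize cycle in Python (the scripting language Nepfix exposes); the rule format, filter representation, and class names are hypothetical and do not reflect the actual Nepfix API or its JSON definition language:

```python
# Minimal synchronous NEP cycle: evolve (mutate words), then communicate
# (move words whose symbols pass the neighbour's input filter).
# Hypothetical structures for illustration only; not the Nepfix API.

class Processor:
    def __init__(self, rules, input_filter):
        self.rules = rules                 # list of (old_symbol, new_symbol) substitutions
        self.input_filter = input_filter   # symbols a word must contain to enter
        self.words = set()

    def evolve(self):
        new_words = set()
        for w in self.words:
            for old, new in self.rules:
                if old in w:
                    new_words.add(w.replace(old, new, 1))
        self.words |= new_words

def communicate(net, edges):
    for src, dst in edges:
        accepted = {w for w in net[src].words
                    if net[dst].input_filter <= set(w)}
        net[dst].words |= accepted

# Tiny two-node network: p0 rewrites 'a' -> 'b'; p1 accepts words containing 'b'.
net = {0: Processor([("a", "b")], set()), 1: Processor([], {"b"})}
net[0].words = {"aa"}
for _ in range(3):                         # start-compute-synchronize cycles
    for p in net.values():
        p.evolve()
    communicate(net, [(0, 1)])
print(net[1].words)                        # words that passed the filter
```

In the distributed synchronous mode described above, the `communicate` step would be the point where instances exchange words over HTTP or AMQP and wait on the RabbitMQ-backed synchronization barrier.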
Abstract:
Cloud Agile Manufacturing is a new paradigm proposed in this article. Its main objective is to offer industrial production systems as a service. Users can thus access any functionality available in the manufacturing cloud (process design, production, management, business integration, factory virtualization, etc.) without knowledge of, or at least without having to be experts in, managing the required resources. The proposal takes advantage of many of the benefits offered by technologies and models such as Business Process Management (BPM), Cloud Computing, Service-Oriented Architectures (SOA) and ontologies. As its starting point, the proposal builds on the Semantic Industrial Machinery as a Service (SIMaaS) approach proposed in previous work, which facilitates the effective integration of industrial machinery into a computing environment by offering it as a network service. The work also includes an analysis of the benefits and disadvantages of the proposal.
Abstract:
This paper proposes a new manufacturing paradigm we call Cloud Agile Manufacturing, whose principal objective is to offer industrial production systems as a service. Users can thus access any functionality available in the manufacturing cloud (process design, production, management, business integration, factory virtualization, etc.) without knowledge of, or at least without having to be experts in, managing the required resources. The proposal takes advantage of many of the benefits offered by technologies and models such as Business Process Management (BPM), Cloud Computing, Service-Oriented Architectures (SOA) and ontologies. As its starting point, the proposal builds on the Semantic Industrial Machinery as a Service (SIMaaS) approach proposed in previous work, which facilitates the effective integration of industrial machinery into a computing environment by offering it as a network service. The work also includes an analysis of the benefits and disadvantages of the proposal.
Abstract:
These days, as we face extremely powerful attacks on servers over the Internet (say, by Advanced Persistent Threat attackers or by surveillance by a powerful adversary), Shamir has claimed that “Cryptography is Ineffective”, and some have understood it as “Cryptography is Dead!” In this talk I will discuss the implications for cryptographic system design when facing such strong adversaries. Is crypto dead, or do we need to design it better, taking into account mathematical constraints but also system vulnerability constraints? Can crypto be effective at all when your computer or your cloud is penetrated? What is lost and what can be saved? These are very basic issues at this point in time, when we are facing a potential loss of privacy and security.
Abstract:
Multi-cloud applications are composed of services offered by multiple cloud platforms, where the user/developer has full knowledge of the use of such platforms. Using multiple cloud platforms avoids the following problems: (i) vendor lock-in, i.e. dependency of the application on a certain cloud platform, which is prejudicial in the case of degradation or failure of platform services, or even price increases in service usage; (ii) degradation or failure of the application due to fluctuations in the quality of service (QoS) provided by some cloud platform, or even due to the failure of some service. In a multi-cloud scenario it is possible to replace a failing service, or one with QoS problems, with an equivalent service from another cloud platform. For an application to adopt the multi-cloud perspective, it is necessary to create mechanisms capable of selecting which cloud services/platforms should be used in accordance with the requirements determined by the programmer/user. In this context, the major challenges in developing such applications include: (i) choosing which underlying cloud computing services and platforms should be used, based on the user-defined requirements for functionality and quality; (ii) continually monitoring the dynamic information related to cloud services (such as response time, availability and price), in addition to the wide variety of services; and (iii) adapting the application if QoS violations affect user-defined requirements. This PhD thesis proposes an approach for the dynamic adaptation of multi-cloud applications, to be applied when a service becomes unavailable or when the requirements set by the user/developer indicate that another available multi-cloud configuration would meet them more efficiently. This work therefore proposes a strategy composed of two phases. The first phase consists of modeling the application, exploiting the capacity for representing commonalities and variability proposed in the context of the Software Product Lines (SPL) paradigm. In this phase an extended feature model is used to specify the cloud service configuration to be used by the application (commonalities) and the different possible providers for each service (variability). Furthermore, the non-functional requirements associated with cloud services are specified by properties in this model, describing dynamic information about these services. The second phase consists of an autonomic process based on the MAPE-K control loop, which is responsible for optimally selecting a multi-cloud configuration that meets the established requirements and for performing the adaptation. The proposed adaptation strategy is independent of the programming technique used to perform the adaptation. In this work we implement the adaptation strategy using several programming techniques, such as aspect-oriented programming, context-oriented programming, and component- and service-oriented programming. Based on the proposed phases, we assess the following: (i) whether the modeling process and the specification of non-functional requirements can ensure effective monitoring of user satisfaction; (ii) whether the optimal selection process presents significant gains compared with a sequential approach; and (iii) which techniques offer the best trade-off between development/modularity effort and performance.
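The autonomic second phase is built on the MAPE-K (Monitor, Analyze, Plan, Execute over shared Knowledge) loop. The following is a minimal sketch of such a loop selecting among multi-cloud configurations; the configuration names, QoS fields and scoring rule are hypothetical stand-ins, not the thesis's actual model:

```python
# Hypothetical MAPE-K style step: monitor QoS, detect violations,
# re-select the best feasible configuration, and "execute" the switch.

requirements = {"max_response_ms": 200, "min_availability": 0.99}

def violates(qos):
    return (qos["response_ms"] > requirements["max_response_ms"]
            or qos["availability"] < requirements["min_availability"])

def score(qos):
    # Toy utility: prefer available, fast, cheap configurations.
    return qos["availability"] * 100 - qos["response_ms"] * 0.1 - qos["price"]

def mape_k_step(current, candidates, monitor):
    qos = monitor(current)                        # Monitor
    if not violates(qos):                         # Analyze: requirements still met
        return current
    feasible = [c for c in candidates if not violates(monitor(c))]
    if not feasible:
        return current                            # no compliant configuration known
    best = max(feasible, key=lambda c: score(monitor(c)))  # Plan
    print(f"adapting: {current} -> {best}")       # Execute (placeholder for the real switch)
    return best

qos_table = {
    "cfgA": {"response_ms": 350, "availability": 0.995, "price": 5},
    "cfgB": {"response_ms": 120, "availability": 0.999, "price": 8},
}
current = mape_k_step("cfgA", ["cfgA", "cfgB"], qos_table.get)
```

A real implementation would persist the Knowledge part of the loop (monitoring history, the extended feature model) and perform the switch through the chosen technique (aspects, contexts, or components and services).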
Abstract:
Provenance plays a pivotal role in tracing the origin of something and determining how and why it occurred. With the emergence of the cloud and the benefits it brings, there has been a rapid proliferation of services adopted by commercial and government sectors. However, trust and security concerns for such services are on an unprecedented scale. Currently, these services expose very little of their internal workings to their customers; this can cause accountability and compliance issues, especially in the event of a fault or error, where customers and providers are left to point fingers at each other. Provenance-based traceability provides a means to address part of this problem, by capturing and querying events that occurred in the past to understand how and why they took place. However, due to the complexity of the cloud infrastructure, current provenance models lack the expressiveness required to describe the inner workings of a cloud service. For a complete solution, a provenance-aware policy language is also required so that operators and users can define policies for compliance purposes. Current policy standards do not cater for such a requirement. To address these issues, in this paper we propose a provenance (traceability) model, cProv, and a provenance-aware policy language, cProvl, to capture traceability data and express policies for validation against the model. For the implementation, we have extended the XACML 3.0 architecture to support provenance, and provided a translator that converts cProvl policies and requests into the XACML format.
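The abstract does not give cProvl's concrete syntax, so the sketch below only illustrates the general shape of translating a provenance-aware rule into an XACML-like structure; every identifier in it is hypothetical:

```python
# Entirely hypothetical illustration of a provenance-aware policy rule
# being rendered as an XACML-like structure; neither the rule format
# nor the function identifiers come from the cProvl paper.

prov_rule = {
    "id": "deny-untraced-writes",
    "effect": "Deny",
    "condition": {
        # provenance predicate: the resource must derive from an audited source
        "wasDerivedFrom": {"entity": "resource", "source-attr": "audited"},
    },
}

def to_xacml_like(rule):
    return {
        "Rule": {
            "RuleId": rule["id"],
            "Effect": rule["effect"],
            "Condition": {
                "Apply": {
                    "FunctionId": "provenance:wasDerivedFrom",
                    "AttributeDesignator": rule["condition"]["wasDerivedFrom"],
                }
            },
        }
    }

print(to_xacml_like(prov_rule))
```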
Abstract:
With the advent of connected objects, the required bandwidth exceeds the capacity of electrical interconnects and wireless interfaces in access networks, but also in core networks. High-capacity photonic systems located in access networks and using radio-over-fibre technology have been proposed as a solution for 5th-generation wireless networks. To maximize the use of server and network resources, cloud computing and storage services are being deployed; in this way, centralized resources can be delivered dynamically, as the end user wishes. Since every exchange requires synchronization between the server and its infrastructure, an optical physical layer allows the cloud to support network virtualization and software-defined networking. Reflective semiconductor optical amplifiers (RSOAs) are a key technology at the ONU (optical network unit) in passive optical access networks (PONs). Here we examine the possibility of using an RSOA and radio-over-fibre technology to transport wireless signals together with a digital signal over a PON. Radio over fibre can be implemented easily thanks to the wavelength insensitivity of the RSOA. The choice of wavelength for the physical layer is, however, made in layers 2/3 of the OSI model. Interactions between the physical layer and network switching can be handled by adding an SDN controller that includes optical-layer managers. Network virtualization could thus benefit from a flexible optical layer through dynamic, adapted network resources. In this thesis, we study a system with an optical physical layer based on an RSOA, which allows us simultaneously to send wireless signals and to transport digital signals in on-off keying (OOK) modulation format over a WDM (wavelength-division multiplexing) PON system. The RSOA was characterized to show its ability to handle the high dynamic range of the analogue wireless signal. Then, the RF-over-fibre and IF-over-fibre variants of the system are compared, with their advantages and drawbacks. Finally, we experimentally demonstrate a WDM point-to-point link with full-duplex transmission of an analogue WiFi signal together with a downstream OOK signal. By introducing two RF mixers in the uplink, we solved the incompatibility with the TDD (time-division duplexing) based wireless system.
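The two RF mixers added to the uplink perform frequency conversion by multiplication: mixing a signal at f_RF with a local oscillator at f_LO produces components at f_RF - f_LO and f_RF + f_LO. A toy numerical illustration of an ideal mixer (the frequencies are illustrative, not the experiment's):

```python
import numpy as np

fs = 1e6                       # sample rate (Hz), illustrative
t = np.arange(0, 1e-3, 1 / fs)
f_rf, f_lo = 250e3, 200e3      # toy RF and local-oscillator frequencies

rf = np.cos(2 * np.pi * f_rf * t)
lo = np.cos(2 * np.pi * f_lo * t)
mixed = rf * lo                # ideal mixer: product -> (f_rf - f_lo) and (f_rf + f_lo)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))           # ~[50000.0, 450000.0] Hz: difference and sum products
```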
Abstract:
Part 5: Service Orientation in Collaborative Networks
Abstract:
Trees from tropical montane cloud forest (TMCF) display very dynamic patterns of water use. They are capable of downwards water transport towards the soil during leaf-wetting events, likely a consequence of foliar water uptake (FWU), as well as high rates of night-time transpiration (E_night) during drier nights. These two processes might represent important sources of water losses and gains to the plant, but little is known about the environmental factors controlling these water fluxes. We evaluated how contrasting atmospheric and soil water conditions control diurnal, nocturnal and seasonal dynamics of sap flow in Drimys brasiliensis (Miers), a common Neotropical cloud forest species. We monitored the seasonal variation of soil water content, micrometeorological conditions and sap flow of D. brasiliensis trees in the field during wet and dry seasons. We also conducted a greenhouse experiment exposing D. brasiliensis saplings under contrasting soil water conditions to deuterium-labelled fog water. We found that during the night D. brasiliensis possesses heightened stomatal sensitivity to soil drought and vapour pressure deficit, which reduces night-time water loss. Leaf-wetting events had a strong suppressive effect on tree transpiration (E). Foliar water uptake increased in magnitude with drier soil and during longer leaf-wetting events. The difference between diurnal and nocturnal stomatal behaviour in D. brasiliensis could be attributed to an optimization of carbon gain when leaves are dry, as well as minimization of nocturnal water loss. The leaf-wetting events, on the other hand, seem important to the D. brasiliensis water balance, especially during soil droughts, both by suppressing tree transpiration (E) and as a small additional water supply through FWU. Our results suggest that decreases in leaf-wetting events in TMCF might increase D. brasiliensis water loss and decrease its water gains, which could compromise its ecophysiological performance and survival during dry periods.
Abstract:
This study aimed to describe and compare the ventilation behavior during an incremental test using three mathematical models, and to compare the features of the ventilation curve fitted by the best mathematical model between aerobically trained (TR) and untrained (UT) men. Thirty-five subjects underwent a treadmill test with 1 km/h increases every minute until exhaustion. Twenty-second ventilation averages were plotted against time and fitted by a bi-segmental regression model (2SRM), a three-segmental regression model (3SRM), and a growth exponential model (GEM). The residual sum of squares (RSS) and mean square error (MSE) were calculated for each model. Correlations between peak VO2 (VO2peak), peak speed (Speed_peak), the ventilatory threshold identified by the best model (VT_2SRM), and the first derivative calculated for workloads below (moderate intensity) and above (heavy intensity) VT_2SRM were computed. The RSS and MSE for GEM were significantly higher (p < 0.01) than for 2SRM and 3SRM in the pooled data and in UT, but no significant difference was observed among the mathematical models in TR. In the pooled data, the first derivative at moderate intensities showed significant negative correlations with VT_2SRM (r = -0.58; p < 0.01) and Speed_peak (r = -0.46; p < 0.05), while the first derivative at heavy intensities showed a significant negative correlation with VT_2SRM (r = -0.43; p < 0.05). In the UT group, the first derivative at moderate intensities showed significant negative correlations with VT_2SRM (r = -0.65; p < 0.05) and Speed_peak (r = -0.61; p < 0.05), while in the TR group the first derivative at heavy intensities showed significant negative correlations with VT_2SRM (r = -0.73; p < 0.01), Speed_peak (r = -0.73; p < 0.01) and VO2peak (r = -0.61; p < 0.05). The ventilation behavior during an incremental treadmill test tends to show only one threshold. UT subjects showed a slower increase in ventilation at moderate intensities, while TR subjects showed a slower increase at heavy intensities.
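The bi-segmental model (2SRM) is a piecewise-linear fit whose breakpoint estimates the ventilatory threshold, with RSS and MSE quantifying goodness of fit. A minimal sketch of such a fit, using synthetic data in place of the measured 20-second ventilation averages (the MSE convention here, RSS over residual degrees of freedom, is an assumption):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_segment(t, t_b, v_b, s1, s2):
    """Bi-segmental regression: slope s1 before breakpoint t_b, slope s2 after."""
    return np.where(t < t_b, v_b + s1 * (t - t_b), v_b + s2 * (t - t_b))

# Synthetic 20 s ventilation averages (L/min) with a threshold near 360 s
rng = np.random.default_rng(0)
t = np.arange(0, 600, 20.0)
ve = two_segment(t, 360.0, 60.0, 0.05, 0.25) + rng.normal(0, 1.5, t.size)

popt, _ = curve_fit(two_segment, t, ve, p0=[300.0, 50.0, 0.1, 0.2])
rss = np.sum((ve - two_segment(t, *popt)) ** 2)
mse = rss / (t.size - len(popt))
print(f"breakpoint (threshold) ~ {popt[0]:.0f} s, RSS = {rss:.1f}, MSE = {mse:.2f}")
```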
Abstract:
A compact frequency standard based on an expanding cold ¹³³Cs cloud is under development in our laboratory. In a first experiment, cold Cs atoms were prepared by a magneto-optical trap in a vapor cell, and a microwave antenna was used to transmit the radiation for the clock transition. The signal obtained from the fluorescence of the expanding cold-atom cloud is used to lock a microwave chain; in this way the overall system stability is evaluated. A theoretical model based on a two-level system interacting with the two microwave pulses enables interpretation of the observed features, especially the poor Ramsey fringe contrast. (C) 2008 Optical Society of America.
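For two ideal pi/2 pulses separated by a free-evolution time T, the textbook two-level Ramsey transition probability is P = (1/2)[1 + C cos(2*pi*delta*T)], where delta is the detuning from resonance and C <= 1 captures the fringe contrast. A small sketch under those textbook assumptions (the numbers are illustrative, not the paper's parameters):

```python
import numpy as np

def ramsey_probability(detuning_hz, T_s, contrast=1.0):
    """Two-level Ramsey transition probability for ideal pi/2 pulses:
    P = 0.5 * (1 + C * cos(2*pi*detuning*T)), with C modelling fringe contrast."""
    return 0.5 * (1.0 + contrast * np.cos(2 * np.pi * detuning_hz * T_s))

# Illustrative values: 10 ms free evolution, contrast reduced to 0.3
# (e.g. by the cloud expanding out of the microwave field), giving the
# kind of low-contrast fringe the abstract mentions.
for d in np.linspace(-200, 200, 9):   # detuning in Hz
    print(f"{d:+6.1f} Hz -> P = {ramsey_probability(d, 0.010, contrast=0.3):.3f}")
```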
Abstract:
The purpose of this study was to determine if performing isometric 3-point kneeling exercises on a Swiss ball influenced the isometric force output and EMG activities of the shoulder muscles when compared with performing the same exercises on a stable base of support. Twenty healthy adults performed the isometric 3-point kneeling exercises with the hand placed either on a stable surface or on a Swiss ball. Surface EMG was recorded from the posterior deltoid, pectoralis major, biceps brachii, triceps brachii, upper trapezius, and serratus anterior muscles using surface differential electrodes. All EMG data were reported as percentages of the average root mean square (RMS) values obtained in maximum voluntary contractions for each muscle studied. The highest load value was obtained during exercise on a stable surface. A significant increase was observed in the activation of glenohumeral muscles during exercises on a Swiss ball. However, there were no differences in EMG activities of the scapulothoracic muscles. These results suggest that exercises performed on unstable surfaces may provide muscular activity levels similar to those performed on stable surfaces, without the need to apply greater external loads to the musculoskeletal system. Therefore, exercises on unstable surfaces may be useful during the process of tissue regeneration.
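Normalizing EMG to maximum voluntary contraction, as described above, is a ratio of RMS amplitudes. A minimal sketch with hypothetical signal windows standing in for the recorded trials:

```python
import numpy as np

def rms(signal):
    """Root mean square amplitude of an EMG window."""
    return np.sqrt(np.mean(np.square(signal)))

def percent_mvc(task_emg, mvc_emg):
    """Express task EMG amplitude as a percentage of the RMS recorded in a
    maximum voluntary contraction (MVC), per the normalization above."""
    return 100.0 * rms(task_emg) / rms(mvc_emg)

# Hypothetical 1 s windows sampled at 1 kHz
rng = np.random.default_rng(1)
mvc = rng.normal(0, 1.0, 1000)    # MVC trial
task = rng.normal(0, 0.4, 1000)   # exercise trial
print(f"{percent_mvc(task, mvc):.1f} %MVC")   # roughly 40 %MVC
```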