Abstract:
To increase the feasibility of using the International Classification of Functioning, Disability and Health (ICF), core sets began to be developed, with the aim of establishing a selection of categories suited to representing the multiprofessional assessment standards of specific groups of patients. With the goal of proposing an ICF core set for classifying the physical health of older adults, a committee of experts was formed to judge the instrument using the Delphi technique, which reflects the multidisciplinary interface of the project. Once the committee's participation was concluded, the core set was proposed with 30 categories. After application to a sample of 340 older adults from the municipalities of Natal/RN and Santa Cruz/RN, the core set was submitted to factor analysis and was reduced to 19 categories. The analysis also made it possible to generate a score for each participant through the factor score, which proved to be a reliable and trustworthy way of scoring a core set.
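The factor score mentioned above can be written, for the common regression (Thomson) estimator, as follows; this is only an illustration, since the abstract does not state which estimator the study used:

\hat{f} = \hat{\Lambda}^{\top}\,\hat{\Sigma}^{-1}(x - \bar{x})

where x is an individual's vector of scores on the 19 retained categories, \bar{x} the sample mean vector, \hat{\Lambda} the estimated factor-loading matrix and \hat{\Sigma} the sample covariance matrix.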
Abstract:
Through numerous technological advances in recent years, along with the popularization of computing devices, society is moving towards an "always connected" paradigm. Computer networks are everywhere, and the advent of IPv6 paves the way for the explosion of the Internet of Things. This concept enables the sharing of data between computing machines and everyday objects. One of the areas encompassed by the Internet of Things is Vehicular Networks. However, the information generated by an individual vehicle is small in volume and does not contribute to an improvement in traffic as long as that information remains isolated. This proposal presents the Infostructure, a system intended to reduce the effort and cost of developing context-aware applications with high-level semantics for the Internet of Things scenario, by allowing data to be managed, stored and combined in order to generate broader context. To this end, we present a reference architecture that shows the major components of the Infostructure. A prototype is then presented and used to validate that our work reaches the desired high-level semantic contextualization, together with a performance evaluation that examines the behavior of the subsystem responsible for managing contextual information over a large amount of data. A statistical analysis is then performed on the results obtained in the evaluation. Finally, we present the conclusions of the work, some open problems (such as the lack of any assurance of the integrity of the sensory data arriving at the Infostructure), and future work that includes the implementation of other modules so that tests can be conducted in real environments.
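The step of combining isolated per-vehicle data into broader context can be pictured with a minimal sketch; all names here (ContextStore, add_reading, average_speed) are hypothetical, since the abstract does not show the Infostructure's API:

from collections import defaultdict
from statistics import mean

class ContextStore:
    """Aggregates isolated per-vehicle readings into broader road context."""
    def __init__(self):
        self._speeds = defaultdict(list)  # road segment -> speed samples

    def add_reading(self, segment: str, speed_kmh: float) -> None:
        self._speeds[segment].append(speed_kmh)

    def average_speed(self, segment: str) -> float:
        # A single vehicle's speed says little; the mean over many vehicles
        # yields the higher-level context (e.g., congestion detection).
        return mean(self._speeds[segment])

store = ContextStore()
for speed in (42.0, 38.5, 12.0, 15.3):
    store.add_reading("BR-101:km42", speed)
print(store.average_speed("BR-101:km42"))  # a low mean may indicate congestion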
Abstract:
Cloud Computing is a paradigm that enables access, in a simple and pervasive way, through the network, to shared and configurable computing resources. Such resources can be offered to users on demand, in a pay-per-use model. With the advance of this paradigm, a single service offered by a cloud platform might not be enough to meet all of a client's requirements; hence, it becomes necessary to compose services provided by different cloud platforms. However, current cloud platforms are not implemented using common standards; each one has its own APIs and development tools, which is a barrier to composing different services. In this context, Cloud Integrator, a service-oriented middleware platform, provides an environment to facilitate the development and execution of multi-cloud applications, which are compositions of services from different cloud platforms represented by abstract workflows. However, Cloud Integrator has some limitations: (i) applications are executed locally; (ii) users cannot specify the application in terms of its inputs and outputs; and (iii) experienced users cannot directly determine the concrete Web services that will perform the workflow. In order to deal with such limitations, this work proposes Cloud Stratus, a middleware platform that extends Cloud Integrator and offers different ways to specify an application: as an abstract workflow or as a complete/partial execution flow. The platform enables application deployment in cloud virtual machines, so that several users can access it through the Internet. It also supports the access and management of virtual machines in different cloud platforms, and provides mechanisms for service monitoring and for the assessment of QoS parameters. Cloud Stratus was validated through a case study consisting of an application that uses services provided by different cloud platforms, and evaluated through computational experiments that analyze the performance of its processes.
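The idea of an abstract workflow being resolved to concrete services can be sketched as below; the activity names, the catalog and the first-available selection rule are all invented for illustration, not Cloud Stratus's actual model:

abstract_workflow = ["store_file", "notify_user"]          # activities only
catalog = {                                                # candidate services
    "store_file":  [{"service": "s3.put",   "platform": "AWS"},
                    {"service": "blob.put", "platform": "Azure"}],
    "notify_user": [{"service": "ses.send", "platform": "AWS"}],
}

def resolve(workflow, catalog):
    """Pick the first available concrete service for each abstract activity."""
    return [catalog[activity][0] for activity in workflow]

print(resolve(abstract_workflow, catalog))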
Abstract:
During the drilling of oil and natural gas wells, solid, liquid and gaseous wastes are generated. The solid fragments, known as cuttings, are carried to the surface by the drilling fluid; this fluid also serves to cool the bit and maintain the internal pressure of the well, among other functions. This solid residue is highly polluting: besides incorporating the drilling fluid, which contains several chemical additives harmful to the environment, it carries heavy metals such as lead. To minimize the residue generated, numerous techniques are currently being studied to mitigate the problems such waste can cause to the environment, such as adding cuttings to the soil-cement bricks used in masonry construction, adding cuttings to the clay matrix for the manufacture of solid masonry bricks and ceramic blocks, and co-processing cuttings in cement. The main objective of this work is thus the incorporation of cuttings from the drilling of oil wells into the cement slurry used in well cementing operations. The cuttings used in this study, from the Pendências formation, were milled and passed through a 100 mesh sieve. After grinding, the mean particle size was on the order of 86 µm, with a crystalline structure containing quartz and calcite phases, characteristic of Portland cement. Cement slurries with a density of 13 lb/gal were formulated and prepared, containing different concentrations of cuttings, and characterization tests were performed according to API SPEC 10A and RP 10B. Free-water tests showed values below 5.9%, and the rheological model that best described the behavior of the mixtures was the power law. The results for compressive strength (10.3 MPa) and stability (Δρ < 0.5 lb/gal) were within the limits set by operational procedures. Thus, the cuttings from the drilling operation may be used alongside Portland cement as binders in oil wells, reusing this waste and reducing the cost of the cement slurry.
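The power-law (Ostwald-de Waele) model referred to above relates shear stress to shear rate as

\tau = K\,\dot{\gamma}^{\,n}

where K is the consistency index and n the flow-behavior index (n < 1 for shear-thinning slurries); the abstract does not report the fitted values of K and n.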
Abstract:
Drilling fluids are of fundamental importance in petroleum activities, since they are responsible for removing the cuttings, maintaining pressure and well stability (preventing collapse and the inflow of formation fluid into the well), and lubricating and cooling the drill bit. There are basically three types of drilling fluids: water-based, non-aqueous and aerated. Water-based drilling fluids are widely used because they are less aggressive to the environment and provide excellent stability and inhibition (when formulated as an inhibitive fluid), among other qualities. Produced water is generated simultaneously with oil during production and has high concentrations of metals and contaminants, so it must be treated before disposal. The produced water from the Urucu-AM and Riacho da Forquilha-RN fields has high concentrations of contaminants, metals and salts such as calcium and magnesium, complicating its treatment and disposal. Thus, the objective of this work was to analyze the use of synthetic produced water, with characteristics similar to the produced water from Urucu-AM and Riacho da Forquilha-RN, to formulate a water-based drilling fluid, observing the influence of varying calcium and magnesium concentrations on filtrate and rheology tests. A simple 3² factorial experimental design was used for the statistical modeling of the data. The results showed that varying the calcium and magnesium concentrations did not influence the rheology of the fluid: the plastic viscosity, the apparent viscosity, and the initial and final gels did not vary significantly. In the filtrate tests, the calcium concentration influenced the chloride concentration linearly: the higher the calcium concentration, the higher the chloride concentration in the filtrate. For the fluids based on Urucu produced water, the calcium concentration influenced the filtrate volume quadratically, meaning that high calcium concentrations interfere with the effectiveness of the filtrate inhibitors used in the fluid formulation. For the fluid based on Riacho produced water, the influence of calcium on the filtrate volume is linear. The magnesium concentration was significant only for the chloride concentration, in a quadratic way, and only for the fluids based on Urucu produced water. The mud with the maximum magnesium concentration (9.411 g/L) and the minimum calcium concentration (0.733 g/L) showed good results. Therefore, produced water with a maximum magnesium concentration of 9.411 g/L and a maximum calcium concentration of 0.733 g/L can be used to formulate water-based drilling fluids, providing appropriate properties for this kind of fluid.
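A 3² factorial design supports exactly the linear and quadratic effects reported above. The sketch below fits such a quadratic response surface with ordinary least squares; only the minimum (0.733 g/L) and maximum (9.411 g/L) concentrations come from the abstract, while the intermediate level and all chloride responses are made-up placeholders:

import numpy as np

# Nine runs of the 3^2 design: three Ca levels crossed with three Mg levels.
ca = np.array([0.733]*3 + [5.072]*3 + [9.411]*3)   # middle level assumed
mg = np.array([0.733, 5.072, 9.411] * 3)
chloride = np.array([10., 11., 13., 14., 15., 18., 19., 21., 26.])  # placeholder

# Model: y = b0 + b1*Ca + b2*Mg + b3*Ca^2 + b4*Mg^2 + b5*Ca*Mg
X = np.column_stack([np.ones_like(ca), ca, mg, ca**2, mg**2, ca * mg])
coef, *_ = np.linalg.lstsq(X, chloride, rcond=None)
print(dict(zip(["b0", "Ca", "Mg", "Ca^2", "Mg^2", "Ca*Mg"], coef.round(3))))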
Abstract:
Wireless Sensor and Actuator Networks (WSAN) are a key component of Ubiquitous Computing Systems and have many applications in different knowledge domains. Programming for such networks is very hard and requires developers to know the specificities of the available sensor platforms, increasing the learning curve for developing WSAN applications. In this work, an MDA (Model-Driven Architecture) approach for WSAN application development, called ArchWiSeN, is proposed. The goal of this approach is to facilitate the development task by providing: (i) a WSAN domain-specific language; (ii) a methodology for WSAN application development; and (iii) an MDA infrastructure composed of several software artifacts (PIM, PSMs and transformations). ArchWiSeN allows domain experts to contribute directly to WSAN application development without specialized knowledge of WSAN platforms and, at the same time, allows network experts to manage the application requirements without specific knowledge of the application domain. Furthermore, the approach aims to enable developers to express and validate functional and non-functional requirements of the application, to incorporate services offered by WSAN middleware platforms, and to promote reuse of the developed software artifacts. In this sense, this Thesis proposes an approach that covers all WSAN development stages for current and emerging scenarios through the proposed MDA infrastructure. The proposal was evaluated by: (i) a proof of concept, encompassing three different scenarios, in which the MDA infrastructure was used to describe the WSAN development process using the application engineering process; (ii) a controlled experiment assessing the use of the proposed approach in comparison with a traditional method of WSAN application development; (iii) an analysis of ArchWiSeN's support for middleware services, to ensure that WSAN applications using such services can achieve their requirements; and (iv) a systematic analysis of ArchWiSeN in terms of desired characteristics for MDA tools, compared with other existing MDA tools for WSAN.
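The core MDA step (transforming a platform-independent model into platform-specific code) can be illustrated minimally; the model fields and the TinyOS-like output below are hypothetical, not ArchWiSeN's actual artifacts:

# Toy PIM-to-PSM transformation; every name here is invented for illustration.
pim = {"sensing": "temperature", "period_ms": 5000, "aggregate": "mean"}

def to_tinyos_sketch(model: dict) -> str:
    """Emit a pseudo-TinyOS configuration from the platform-independent model."""
    return (
        "// generated from PIM\n"
        "components TempSensorC, AggregatorC;\n"
        f"// sample {model['sensing']} every {model['period_ms']} ms, "
        f"aggregate with {model['aggregate']}\n"
    )

print(to_tinyos_sketch(pim))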
Abstract:
High dependability, availability and fault tolerance are open problems in Service-Oriented Architecture (SOA). The possibility of building software applications by reliably integrating services from heterogeneous domains makes it worthwhile to face the challenges inherent to this paradigm. In order to ensure quality in service compositions, some research efforts propose the adoption of verification techniques to identify and correct errors. In this context, exception handling is a powerful mechanism for increasing SOA quality. Several research works are concerned with mechanisms for exception propagation in web services, implemented in many languages and frameworks. However, to the extent of our knowledge, no existing work evaluates these mechanisms in SOA with regard to the .NET framework. The main contribution of this work is to evaluate and propose exception propagation mechanisms in SOA for applications developed within the .NET framework. In this direction, this work: (i) extends a previous study, showing the need to propose a solution for exception propagation in SOA for applications developed in .NET; and (ii) presents a solution, based on a model derived from the results found in (i), to be applied to real cases through fault injection and AOP techniques.
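The propagation pattern at stake (catching a low-level failure at a service boundary and re-raising it enriched with context) can be sketched language-neutrally; the thesis targets .NET, so this Python version and every name in it are merely illustrative:

class ServiceFault(Exception):
    """Carries the originating service name across composition boundaries."""
    def __init__(self, service: str, cause: Exception):
        super().__init__(f"{service} failed: {cause}")
        self.service, self.cause = service, cause

def call_service(name, func, *args):
    # Catch the low-level error and re-raise it enriched with context, so the
    # consumer of the composition can tell which partner service failed.
    try:
        return func(*args)
    except Exception as exc:
        raise ServiceFault(name, exc) from exc

def flaky_geocoder(address):
    raise TimeoutError("no response in 30 s")

try:
    call_service("GeocodingService", flaky_geocoder, "Natal, RN")
except ServiceFault as fault:
    print(fault)  # GeocodingService failed: no response in 30 s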
Abstract:
Cloud computing can be defined as a distributed computational model through which resources (hardware, storage, development platforms and communication) are shared as paid services, accessible with minimal management effort and interaction. A great benefit of this model is that it enables the use of various providers (e.g. a multi-cloud architecture) to compose a set of services, in order to obtain an optimal configuration for performance and cost. However, multi-cloud usage is hindered by the problem of cloud lock-in: the dependency between an application and a cloud platform. It is commonly addressed by three strategies: (i) the use of an intermediate layer standing between consumers of cloud services and the providers; (ii) the use of standardized interfaces to access the cloud; or (iii) the use of models with open specifications. This work outlines an approach for evaluating these strategies. The evaluation was carried out, and it was found that, despite the advances they bring, none of these strategies actually solves the cloud lock-in problem. In this sense, this work proposes the use of Semantic Web technologies to avoid cloud lock-in, where RDF models are used to specify the features of a cloud and are managed through SPARQL queries. In this direction, this work: (i) presents an evaluation model that quantifies the cloud lock-in problem; (ii) evaluates cloud lock-in in three multi-cloud solutions and three cloud platforms; (iii) proposes the use of RDF and SPARQL for the management of cloud resources; (iv) presents the Cloud Query Manager (CQM), a SPARQL server that implements the proposal; and (v) compares three multi-cloud solutions with CQM with respect to response time and effectiveness in resolving cloud lock-in.
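The RDF-plus-SPARQL core of the proposal can be sketched with a few triples and one query; the vocabulary (ex:provides, ex:pricePerGB) is invented for illustration, since the abstract does not give CQM's actual schema:

from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/cloud#")
g = Graph()
# Describe one cloud platform's features as RDF triples.
g.add((EX.PlatformA, RDF.type, EX.CloudPlatform))
g.add((EX.PlatformA, EX.provides, EX.Storage))
g.add((EX.PlatformA, EX.pricePerGB, Literal(0.023)))

# Retrieve storage-capable platforms and their prices with SPARQL.
results = g.query("""
    PREFIX ex: <http://example.org/cloud#>
    SELECT ?platform ?price WHERE {
        ?platform ex:provides ex:Storage ;
                  ex:pricePerGB ?price .
    }""")
for platform, price in results:
    print(platform, price)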
Abstract:
Nicotine administration in humans and rodents enhances memory and attention, and also has a positive effect in Alzheimer's disease. The Medial Septum / Diagonal Band of Broca complex (MS/DBB), a major cholinergic system, projects massively to the hippocampus through the fimbria-fornix; this projection is called the septohippocampal pathway. It has been demonstrated that the MS/DBB acts directly on the rhythmic organization of the hippocampal local field potential (LFP), especially in the rhythmogenesis of Theta (4-8 Hz), an oscillation intrinsically linked to the mnemonic function of the hippocampus. In vitro experiments have given evidence that nicotine applied to the MS/DBB generates a local network Theta rhythm within the MS/DBB. Thus, the present study proposes to elucidate the effect of nicotine in the MS/DBB on the septohippocampal pathway. In vivo experiments compared the effect of MS/DBB microinfusion of saline (n=5) and nicotine (n=8) in ketamine/xylazine-anaesthetized mice. We observed an increase in power spectral density in the Gamma range (35 to 55 Hz) in both structures (Wilcoxon rank-sum test, p=0.038), but no change in coherence between these structures in the same range (Wilcoxon rank-sum test, p=0.60). There was also a decrease in the power of the ketamine-induced Delta oscillation (1 to 3 Hz). We also performed in vitro experiments on the effect of nicotine on membrane voltage and action potentials. We patch-clamped 22 neurons in current-clamp mode; 12 neurons were responsive to nicotine, half of them increasing their firing rate and the other 6 decreasing it, and the two groups differed significantly in action potential threshold (-47.3±0.9 mV vs. -41±1.9 mV, respectively, p=0.007) and half-width time (1.6±0.08 ms vs. 2±0.12 ms, respectively, p=0.01). Furthermore, we performed another set of in vitro experiments concerning the connectivity of the three major neuronal populations of the MS/DBB, which use acetylcholine, GABA or glutamate as neurotransmitters. Paired patch-clamp recordings showed that glutamatergic and GABAergic neurons make intra-septal connections that produce sizable currents in MS/DBB postsynaptic neurons. The connection probabilities between the different neuronal populations gave rise to an MS/DBB topology that was implemented in a realistic model, which corroborates that the network is highly sensitive to the generation of Gamma rhythm. Together, the data from the full set of experiments suggest that nicotine may act as a cognitive enhancer by inducing Gamma oscillation in the local circuitry of the MS/DBB.
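The two analyses named above (band-limited power spectral density and a rank-sum comparison between groups) can be sketched as follows; the traces are random noise and the 1000 Hz sampling rate is assumed, so only the shape of the analysis is meaningful:

import numpy as np
from scipy.signal import welch
from scipy.stats import ranksums

fs = 1000  # sampling rate (Hz), assumed
rng = np.random.default_rng(0)

def gamma_power(lfp):
    # Welch power spectral density, averaged over the Gamma band (35-55 Hz).
    freqs, psd = welch(lfp, fs=fs, nperseg=1024)
    band = (freqs >= 35) & (freqs <= 55)
    return psd[band].mean()

saline   = [gamma_power(rng.standard_normal(10 * fs)) for _ in range(5)]
nicotine = [gamma_power(1.5 * rng.standard_normal(10 * fs)) for _ in range(8)]
print(ranksums(nicotine, saline))  # Wilcoxon rank-sum statistic and p-value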
Abstract:
Multi-cloud applications are composed of services offered by multiple cloud platforms, where the user/developer has full knowledge of the use of such platforms. The use of multiple cloud platforms avoids the following problems: (i) vendor lock-in, i.e. the dependency of the application on a certain cloud platform, which is prejudicial in the case of degradation or failure of platform services, or even of price increases in service usage; and (ii) degradation or failure of the application due to fluctuations in the quality of service (QoS) provided by some cloud platform, or even due to the failure of some service. In a multi-cloud scenario it is possible to replace a failed service, or one with QoS problems, with an equivalent service from another cloud platform. For an application to adopt the multi-cloud perspective, it is necessary to create mechanisms that are able to select which cloud services/platforms should be used, in accordance with the requirements determined by the programmer/user. In this context, the major challenges in developing such applications include: (i) choosing which underlying services and cloud computing platforms should be used, based on user-defined requirements for functionality and quality; (ii) continually monitoring dynamic information (such as response time, availability and price) related to cloud services, in addition to the wide variety of services; and (iii) adapting the application if QoS violations affect the user-defined requirements. This PhD thesis proposes an approach for the dynamic adaptation of multi-cloud applications, to be applied when a service becomes unavailable or when the requirements set by the user/developer indicate that another available multi-cloud configuration meets them more efficiently. The proposed strategy is composed of two phases. The first phase consists of application modeling, exploring the capacity for representing commonalities and variability proposed in the context of the Software Product Line (SPL) paradigm. In this phase, an extended feature model is used to specify the cloud service configuration to be used by the application (commonalities) and the different possible providers for each service (variability). Furthermore, the non-functional requirements associated with cloud services are specified by properties in this model, describing dynamic information about these services. The second phase consists of an autonomic process based on the MAPE-K control loop, which is responsible for optimally selecting a multi-cloud configuration that meets the established requirements and for performing the adaptation. The proposed adaptation strategy is independent of the programming technique used to perform the adaptation. In this work we implement the adaptation strategy using several programming techniques, such as aspect-oriented programming, context-oriented programming, and component- and service-oriented programming. Based on the proposed steps, we sought to assess: (i) whether the modeling process and the specification of non-functional requirements can ensure effective monitoring of user satisfaction; (ii) whether the optimal selection process presents significant gains compared to a sequential approach; and (iii) which techniques present the best trade-off between development/modularity effort and performance.
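The MAPE-K control loop of the second phase can be outlined in a few lines; the monitored property, the threshold and the provider names are invented placeholders rather than the thesis's actual model:

# Shared knowledge base: user requirements and current configuration.
knowledge = {"max_response_ms": 200, "current_provider": "CloudA"}

def monitor():       return {"response_ms": 350}           # stub measurement
def analyze(m):      return m["response_ms"] > knowledge["max_response_ms"]
def plan():          return {"switch_to": "CloudB"}        # pick an alternative
def execute(change): knowledge["current_provider"] = change["switch_to"]

for _ in range(1):            # one loop iteration; a real system runs forever
    metrics = monitor()
    if analyze(metrics):      # QoS violation against the user requirements?
        execute(plan())
print(knowledge["current_provider"])  # CloudB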
Abstract:
The spread of wireless networks and the growing proliferation of mobile devices require the development of mobility control mechanisms that support different traffic demands under different network conditions. A major obstacle to developing this kind of technology is the complexity involved in handling all the information about the large number of Moving Objects (MO), as well as the entire signaling overhead required to manage these procedures in the network. Although several initiatives have been proposed by the scientific community to address this issue, they have not proved to be effective, since they depend on a particular request from the MO, which is responsible for triggering the mobility process. Moreover, they are often guided only by wireless medium statistics, such as the Received Signal Strength Indicator (RSSI) of the candidate Point of Attachment (PoA). Thus, this work seeks to develop, evaluate and validate a sophisticated communication infrastructure for Wireless Networking for Moving Objects (WiNeMO) systems, making use of the flexibility provided by the Software-Defined Networking (SDN) paradigm, where network functions are easily and efficiently deployed by integrating the OpenFlow and IEEE 802.21 standards. For benchmarking purposes, the analysis covered both control plane and data plane aspects, demonstrating that the proposal significantly outperforms typical IP-based SDN and QoS-enabled capabilities by allowing the network to handle multimedia traffic with optimal Quality of Service (QoS) transport and acceptable Quality of Experience (QoE) over time.
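A network-initiated handover decision of the kind this infrastructure enables, weighing more than RSSI alone, might look like the sketch below; the fields, weights and scoring rule are illustrative, not the actual WiNeMO logic:

candidates = [
    {"poa": "AP-1", "rssi_dbm": -48, "load": 0.90, "supports_qos": False},
    {"poa": "AP-2", "rssi_dbm": -61, "load": 0.20, "supports_qos": True},
]

def score(poa):
    # Normalize RSSI over a -90..-30 dBm range, penalize load, and reward
    # QoS support, so signal strength alone does not dictate the handover.
    signal = (poa["rssi_dbm"] + 90) / 60
    return 0.4 * signal + 0.4 * (1 - poa["load"]) + 0.2 * poa["supports_qos"]

best = max(candidates, key=score)
print(best["poa"])  # AP-2: weaker signal, but unloaded and QoS-capable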
Abstract:
The maintenance and evolution of software systems has become a very critical task over the last years, due to the diversity and high demand of functionalities, devices and users. Understanding and analyzing how new changes impact the quality attributes of the architecture of such systems is an essential prerequisite for avoiding the deterioration of their quality during their evolution. This thesis proposes an automated approach for analyzing the variation of the performance quality attribute in terms of execution time (response time). It is implemented by a framework that adopts dynamic analysis and software repository mining techniques to provide an automated way of revealing potential sources (commits and issues) of performance variation in scenarios during the evolution of software systems. The approach defines four phases: (i) preparation, choosing the scenarios and preparing the target releases; (ii) dynamic analysis, determining the performance of scenarios and methods by computing their execution times; (iii) variation analysis, processing and comparing the dynamic analysis results for different releases; and (iv) repository mining, identifying the issues and commits associated with the detected performance variation. Empirical studies were performed to evaluate the approach from different perspectives. An exploratory study analyzed the feasibility of applying the approach to systems from different domains in order to automatically identify source code elements with performance variation and the changes that affected such elements during an evolution. This study analyzed three systems: (i) SIGAA, a web system for academic management; (ii) ArgoUML, a UML modeling tool; and (iii) Netty, a framework for network applications. Another study performed an evolutionary analysis by applying the approach to multiple releases of Netty and of the web frameworks Wicket and Jetty. In this study, 21 releases (seven of each system) were analyzed, totaling 57 scenarios. In summary, 14 scenarios with significant performance variation were found for Netty, 13 for Wicket and 9 for Jetty. Additionally, feedback was obtained from eight developers of these systems through an online questionnaire. Finally, in the last study, a performance regression model was developed to indicate the properties of commits that are more likely to cause performance degradation. Overall, 997 commits were mined, of which 103 were retrieved from degraded source code elements and 19 from optimized ones, while 875 had no impact on execution time. The number of days before the release and the day of the week proved to be the most relevant variables of commits that degrade performance in our model. The Receiver Operating Characteristic (ROC) area of the regression model is 60%, which means that using the model to decide whether or not a commit will cause degradation is 10% better than a random decision.
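Phase (iii), the variation analysis, reduces to comparing a scenario's execution times across releases and flagging significant deltas; in this sketch the timings and the 10% threshold are illustrative placeholders:

from statistics import mean

times_v1 = [120.0, 118.5, 121.2, 119.8]   # ms, scenario runs on release N
times_v2 = [141.0, 139.4, 143.2, 140.1]   # ms, same scenario on release N+1

def variation(before, after, threshold=0.10):
    # Relative change of the mean execution time between the two releases.
    delta = (mean(after) - mean(before)) / mean(before)
    return delta, abs(delta) > threshold

delta, significant = variation(times_v1, times_v2)
print(f"{delta:+.1%} significant={significant}")  # +17.6% significant=True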