941 results for end user computing application streaming horizon workspace portal vmware view
Abstract:
In recent years, vehicular cloud computing (VCC) has emerged as a technology used in a wide range of multimedia-based healthcare applications. In VCC, vehicles act as intelligent machines that collect and transfer healthcare data to local or global sites for storage and computation, since vehicles have comparatively limited storage and computation power for handling multimedia files. However, due to dynamic changes in topology and the lack of centralized monitoring points, this information can be altered or misused. Such security breaches can have disastrous consequences, such as loss of life or financial fraud. To address these issues, a learning automata-assisted distributive intrusion detection system based on clustering is designed. Although the proposed scheme can be applied in a number of settings, a multimedia-based healthcare application is used here for illustration. In the proposed scheme, learning automata (LA) are assumed to be stationed on the vehicles; they take clustering decisions intelligently and select one member of the group as a cluster-head. The cluster-heads then assist in the efficient storage and dissemination of information through a cloud-based infrastructure. To secure the proposed scheme from malicious activities, a standard cryptographic technique is used in which the automaton learns from the environment and takes adaptive decisions to identify any malicious activity in the network. The stochastic environment in which an automaton performs its actions issues a reward or penalty, and the automaton updates its action probability vector after receiving this reinforcement signal. The proposed scheme was evaluated using extensive simulations on ns-2 with SUMO.
The results obtained indicate that the proposed scheme yields a 10% improvement in the detection rate of malicious nodes when compared with existing schemes.
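The reward/penalty update of the action probability vector described above follows the classical learning-automata pattern. A minimal sketch using the linear reward-penalty (L_R-P) scheme is given below; the learning constants `a` and `b` are illustrative assumptions, and the abstract does not state which update scheme the authors actually use.

```python
def update_probabilities(p, chosen, beta, a=0.1, b=0.05):
    """Update the action probability vector p after playing action `chosen`.

    beta = 0 -> reward (favourable environment response)
    beta = 1 -> penalty (unfavourable environment response)
    Returns a new vector that still sums to 1.
    """
    r = len(p)
    q = list(p)
    if beta == 0:  # reward: reinforce the chosen action
        for i in range(r):
            if i == chosen:
                q[i] = p[i] + a * (1 - p[i])
            else:
                q[i] = (1 - a) * p[i]
    else:  # penalty: redistribute probability away from the chosen action
        for i in range(r):
            if i == chosen:
                q[i] = (1 - b) * p[i]
            else:
                q[i] = b / (r - 1) + (1 - b) * p[i]
    return q
```

Repeatedly rewarding actions that lead to sound clustering decisions drives the corresponding probabilities toward 1, which is what lets the automaton adapt to malicious behaviour in the network.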
Abstract:
Potentiometric sensors are typically unable to carry out on-site monitoring of environmental drug contaminants because of their high limits of detection (LODs). Designing a novel ligand material for the target analyte and managing the composition of the internal reference solution were the strategies employed here to produce, for the first time, a potentiometric direct-reading method for an environmental drug contaminant. This concept was applied to sulfamethoxazole (SMX), one of the many antibiotics used in aquaculture practices that may occur in environmental waters. The novel ligand was produced by imprinting SMX on the surface of graphitic carbon nanostructures (CN) < 500 nm. The imprinted carbon nanostructures (ICN) were dispersed in plasticizer and entrapped in a PVC matrix that included (or not) a small amount of a lipophilic additive. The membrane composition was optimized on solid-contact electrodes, allowing near-Nernstian responses down to 5.2 μg/mL and detecting 1.6 μg/mL. The membranes offered good selectivity against most of the ionic compounds in environmental water. The best membrane cocktail was applied on the smaller end of a 1000 μL polypropylene micropipette tip. The tip was then filled with an inner reference solution containing SMX and chlorate (as an interfering compound). The corresponding concentrations studied ranged from 1 × 10⁻⁵ to 1 × 10⁻¹⁰ and from 1 × 10⁻³ to 1 × 10⁻⁸ mol/L. The best condition allowed the detection of 5.92 ng/L (2.3 × 10⁻⁸ mol/L) SMX, with a sub-Nernstian slope of −40.3 mV/decade from 5.0 × 10⁻⁸ to 2.4 × 10⁻⁵ mol/L.
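The slope reported above (−40.3 mV/decade) characterises the electrode response as EMF versus the base-10 logarithm of concentration, and is recovered from calibration data by least squares. The sketch below shows that computation on synthetic data; the calibration points are invented for illustration, not the paper's measurements.

```python
import math

def electrode_slope(concs_mol_per_l, emfs_mv):
    """Least-squares slope (mV/decade) of EMF vs log10(concentration).

    A Nernstian response for a monovalent anion is about -59.2 mV/decade
    at 25 °C; a smaller magnitude, like the -40.3 mV/decade reported,
    is called sub-Nernstian.
    """
    xs = [math.log10(c) for c in concs_mol_per_l]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(emfs_mv) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, emfs_mv))
    return sxy / sxx  # mV per decade of concentration
```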
Abstract:
Work presented within the scope of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.
Abstract:
20th International Conference on Reliable Software Technologies (Ada-Europe 2015), 22-26 June 2015, Madrid, Spain.
Abstract:
IEEE International Conference on Pervasive Computing and Communications (PerCom), PhD Forum, 23-26 March 2015, Saint Louis, U.S.A.
Abstract:
Over the last few years, 3D scanners have seen growing use in a wide variety of areas. From Medicine to Archaeology, and across several types of industry, applications of these systems can be identified. This growing adoption is due, among other factors, to the increase in computational resources, to the simplicity and diversity of the existing techniques, and to the advantages of 3D scanners over other systems. These advantages are evident in areas such as Forensic Medicine, where photography, traditionally used to document objects and evidence, reduces the acquired information to two dimensions. Despite the advantages associated with 3D scanners, one negative factor is their high price. The aim of this work was to develop an inexpensive and effective structured-light 3D scanner, together with a set of algorithms to control the scanner, to reconstruct the surfaces of the analysed structures, and to validate the results obtained. The implemented 3D scanner consists of an off-the-shelf camera and video projector, and of a rotating platform developed in this work. The purpose of the rotating platform is to automate the scanner so as to reduce user interaction. The algorithms were developed using open-source software packages and free tools. The 3D scanner was used to acquire 3D information from a skull, and the surface reconstruction algorithm produced virtual surfaces of the skull. Using the validation algorithm, these surfaces were compared with a surface of the same skull obtained by computed tomography (CT). The validation algorithm provided a map of distances between corresponding regions of the two surfaces, which made it possible to quantify the quality of the surfaces obtained.
Based on the work carried out and the results obtained, it can be stated that a functional basis for 3D surface scanning of structures was created, suitable for future development, showing that alternatives to commercial methods can be obtained with limited financial resources.
Abstract:
To improve surgical safety and to reduce mortality and the incidence of surgical complications, the World Health Organization (WHO) developed the Surgical Safety Checklist (SSC). The SSC is an information support that helps health professionals reduce the number of complications at three moments: induction of anaesthesia, before skin incision, and before leaving the operating room (OR). The SSC was tested in several countries, and the results showed that after its introduction the incidence of patient complications fell from 11.0% to 7.0% (P < 0.001), the death rate declined from 1.5% to 0.8% (P = 0.003), and nurses recognised that patient identity was confirmed more often (81.6% to 94.2%, P < 0.01) in many institutions. Recently the SSC was also implemented in Portuguese hospitals, which led us to study it in a real clinical environment. An observational study was performed: several health professionals were observed and interviewed to understand how the SSC functions in an OR during the clinical routine. The objective of this study was to understand the current use of the SSC, and how its usability may be improved by taking advantage of technological advancements such as mobile applications. Over two days, 14 surgeries were observed; only 2 met the requirements for the three phases of the SSC as defined by the WHO. Of the remaining 12, 9 completed the last phase at the correct time. It was also observed that in only 2 surgeries were all the phases of the SSC read aloud to the team and that, in 7 surgeries, several items were read aloud and answered but no one was checking the SSC until after the end of the phase. The observational study results show that several health professionals do not comply with the rules of the WHO manual.
This study demonstrates that it is urgent to change the mindset of health professionals, and that different features in the SSC may be useful to make it easier to use. Based on the results of the observational study, an SSC application proposal was developed with new functionalities to improve and aid health professionals in its use. In this application the user can choose between an SSC already created for a specific surgery and creating a new SSC, adding and adapting questions from the WHO standard. To create a new SSC, the application connects to an online questionnaire builder (JotForm). This builder was chosen for three essential characteristics: the number of question types (mainly checkbox, radio button, and text), the possibility of creating sections inside sections, and the API. In addition, the proposed improvements focus on keeping the user engaged in the workflow of the SSC and on saving input timestamps and any actions made. Therefore, the following features were implemented: display one item of the SSC at a time; display the stage the SSC is in; do not allow going back to the previous step; do not allow going forward to the next item if the current one is not filled; do not allow going forward if the time taken to fill the item was too short; and log any action made by the user.
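The workflow constraints above (one item at a time, no going back, rejecting answers given too quickly, logging every action with a timestamp) can be sketched as a small state machine. This is an illustrative sketch only: the 2-second threshold, the class layout, and the item texts are assumptions, not the proposal's actual implementation.

```python
import time

class ChecklistRunner:
    """Enforces a forward-only checklist workflow and logs all actions."""

    MIN_SECONDS_PER_ITEM = 2.0  # assumed "too fast" threshold

    def __init__(self, items, clock=time.monotonic):
        self.items = items
        self.index = 0          # position of the item currently shown
        self.log = []           # timestamped record of every action
        self.clock = clock
        self.shown_at = clock()

    def current_item(self):
        """The single item shown to the user (None when finished)."""
        return self.items[self.index] if self.index < len(self.items) else None

    def answer(self, value):
        """Record an answer; reject it if given suspiciously fast."""
        now = self.clock()
        if now - self.shown_at < self.MIN_SECONDS_PER_ITEM:
            self.log.append(("rejected_too_fast", self.index, now))
            return False  # same item stays on screen
        self.log.append(("answered", self.index, value, now))
        self.index += 1   # advance only; going back is not allowed
        self.shown_at = now
        return True
```

Because the runner only ever increments `index`, skipping ahead or returning to a previous phase is impossible by construction, which mirrors the constraints listed in the proposal.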
Abstract:
Near real time media content personalisation is nowadays a major challenge involving media content sources, distributors and viewers. This paper describes an approach to seamless recommendation, negotiation and transaction of personalised media content. It adopts an integrated view of the problem by proposing, on the business-to-business (B2B) side, a brokerage platform to negotiate the media items on behalf of the media content distributors and sources, providing viewers, on the business-to-consumer (B2C) side, with a personalised electronic programme guide (EPG) containing the set of recommended items after negotiation. In this setup, when a viewer connects, the distributor looks up and invites sources to negotiate the contents of the viewer personal EPG. The proposed multi-agent brokerage platform is structured in four layers, modelling the registration, service agreement, partner lookup, invitation as well as item recommendation, negotiation and transaction stages of the B2B processes. The recommendation service is a rule-based switch hybrid filter, including six collaborative and two content-based filters. The rule-based system selects, at runtime, the filter(s) to apply as well as the final set of recommendations to present. The filter selection is based on the data available, ranging from the history of items watched to the ratings and/or tags assigned to the items by the viewer. Additionally, this module implements (i) a novel item stereotype to represent newly arrived items, (ii) a standard user stereotype for new users, (iii) a novel passive user tag cloud stereotype for socially passive users, and (iv) a new content-based filter named the collinearity and proximity similarity (CPS). At the end of the paper, we present off-line results and a case study describing how the recommendation service works. The proposed system provides, to our knowledge, an excellent holistic solution to the problem of recommending multimedia contents.
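As a rough illustration of how a rule-based switch might select filters at runtime from the data available for a viewer, the sketch below uses hypothetical rule conditions and filter names; the platform's actual rules are not given in the abstract, so everything here is an assumption.

```python
def select_filters(profile):
    """Pick which recommendation filters to apply for a viewer profile.

    `profile` is a dict with optional keys: "history" (items watched),
    "ratings", "tags", and "passive" (socially passive viewer).
    """
    filters = []
    if not profile.get("history"):
        # new user: fall back to the user stereotype
        filters.append("user-stereotype")
    else:
        filters.append("collaborative")
        if profile.get("ratings") or profile.get("tags"):
            # enough explicit feedback for a content-based filter
            filters.append("content-based-CPS")
        elif profile.get("passive"):
            # socially passive viewer: use the tag-cloud stereotype
            filters.append("passive-user-tag-cloud-stereotype")
    return filters
```

The point of such a switch is that the hybrid recommender degrades gracefully: it always has at least one applicable filter, whatever subset of viewing history, ratings, or tags is available.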
Abstract:
The Internet of Things (IoT) has emerged as a paradigm over the last few years as a result of the tight integration of the computing and the physical worlds. The requirement of remote sensing makes low-power wireless sensor networks one of the key enabling technologies of IoT. These networks face several challenges, especially in communication and networking, due to their inherent constraints of low-power operation, deployment in harsh and lossy environments, and limited computing and storage resources. The IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) [1] was proposed by the IETF ROLL (Routing Over Low-power Lossy links) working group and has been adopted as an IETF standard in RFC 6550 since March 2012. Although RPL largely satisfies the requirements of low-power and lossy sensor networks, several issues remain open for improvement and specification, in particular with respect to Quality of Service (QoS) guarantees and support for mobility. In this paper, we focus mainly on the RPL routing protocol and propose enhancements to the standard specification in order to provide QoS guarantees for static as well as mobile LLNs. For this purpose, we propose OF-FL (Objective Function based on Fuzzy Logic), a new objective function that overcomes the limitations of the standardized objective functions designed for RPL by considering important link and node metrics, namely end-to-end delay, number of hops, ETX (expected transmission count) and LQL (link quality level). In addition, we present the design of Co-RPL, an extension to RPL based on the corona mechanism that supports mobility, in order to overcome the problem of slow reactivity to frequent topology changes and thus provide a better quality of service, mainly in dynamic network applications.
Performance evaluation results show that both OF-FL and Co-RPL yield a great improvement over the standard specification, mainly in terms of packet loss ratio and average network latency.
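To illustrate the general idea of combining the four metrics that OF-FL considers (end-to-end delay, hop count, ETX, and LQL) into a single neighbour rank, the sketch below uses simple linear membership functions and a min-combination. The paper's actual fuzzy system, membership functions, and metric scales are not specified in the abstract, so every constant here is an assumption.

```python
def route_quality(delay_ms, hops, etx, lql):
    """Rank a candidate parent route in [0, 1]; higher is better."""

    def good(value, worst):
        # linear membership: 1.0 = ideal, 0.0 = at or beyond `worst`
        return max(0.0, 1.0 - value / worst)

    memberships = [
        good(delay_ms, 1000.0),  # delays near 1 s treated as bad (assumed)
        good(hops, 20.0),        # assumed maximum useful path length
        good(etx, 10.0),         # expected transmission count
        good(lql, 7.0),          # LQL on an assumed 0-7 scale, lower = better
    ]
    # conservative fuzzy AND: a route is only as good as its worst metric
    return min(memberships)
```

A node would compute this score for each candidate parent and prefer the highest-ranked one, which is how a multi-metric objective function plugs into RPL's parent-selection step.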
Abstract:
The complexity of systems is considered an obstacle to the progress of the IT industry. Autonomic computing is presented as an alternative to cope with this growing complexity. It is a holistic approach in which systems are able to configure, heal, optimize, and protect themselves. Web-based applications are an example of systems whose complexity is high. The number of components, their interoperability, and workload variations are factors that may lead to performance failures or unavailability scenarios. The occurrence of these scenarios affects the revenue and reputation of businesses that rely on such applications. In this article, we present a self-healing framework for Web-based applications (SHõWA). SHõWA is composed of several modules, which monitor the application, analyze the data to detect and pinpoint anomalies, and execute recovery actions autonomously. The monitoring is done by a small aspect-oriented programming agent. This agent does not require changes to the application source code and includes adaptive and selective algorithms to regulate the level of monitoring. The anomalies are detected and pinpointed by means of statistical correlation. The data analysis detects changes in the server response time and determines whether those changes are correlated with the workload or are due to a performance anomaly. In the presence of performance anomalies, the data analysis pinpoints the anomaly, and SHõWA executes a recovery procedure. We also present a study about the detection and localization of anomalies, the accuracy of the data analysis, and the performance impact induced by SHõWA. Two benchmarking applications, exercised through dynamic workloads, and different types of anomaly were considered in the study.
The results reveal that (1) SHõWA detects and pinpoints anomalies while the number of affected end users is still low; (2) SHõWA was able to detect anomalies without raising any false alarm; and (3) SHõWA does not induce a significant performance overhead (throughput was affected by less than 1%, and the response time delay was no more than 2 milliseconds).
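The correlation idea behind the data analysis (a response-time increase that tracks the workload is just load; one that does not is a performance anomaly) can be illustrated with a Pearson coefficient. The statistic, the window, and the 0.8 threshold below are illustrative assumptions, not SHõWA's exact test.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def is_performance_anomaly(workload, response_time, threshold=0.8):
    """Flag windows where response time rises without tracking workload."""
    rising = response_time[-1] > response_time[0]
    # weakly correlated rise -> likely an internal performance anomaly
    return rising and pearson(workload, response_time) < threshold
```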
Abstract:
Dissertation for obtaining the degree of Master in Informatics Engineering.
Abstract:
Dissertation for obtaining the degree of Doctor in Mathematics.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation for obtaining the degree of Doctor in Chemistry.
Abstract:
Dissertation for obtaining the degree of Master in Industrial Engineering and Management.