954 results for context-aware applications
Abstract:
With wireless vehicular communications, Vehicular Ad Hoc Networks (VANETs) enable numerous applications that enhance traffic safety, traffic efficiency, and the driving experience. However, VANETs also impose severe security and privacy challenges which need to be thoroughly investigated. In this dissertation, we enhance the security, privacy, and applications of VANETs by 1) designing application-driven security and privacy solutions for VANETs, and 2) designing appealing VANET applications with proper security and privacy assurance. First, the security and privacy challenges of VANETs with the most application significance are identified and thoroughly investigated. With both theoretical novelty and realistic considerations, the resulting security and privacy schemes are especially appealing for VANETs. Specifically, multi-hop communications in VANETs suffer from packet dropping, packet tampering, and communication failures, which have not been satisfactorily addressed in the literature. Thus, a lightweight, reliable, and faithful data packet relaying framework (LEAPER) is proposed to ensure reliable and trustworthy multi-hop communications by enhancing the cooperation of neighboring nodes. Message verification, including both content and signature verification, is generally computation-intensive and imposes severe scalability issues on each node. The resource-aware message verification (RAMV) scheme is proposed to ensure resource-aware, secure, and application-friendly message verification in VANETs. On the other hand, to make VANETs acceptable to privacy-sensitive users, the identity and location privacy of each node should be properly protected. To this end, a joint privacy and reputation assurance (JPRA) scheme is proposed to synergistically support privacy protection and reputation management by reconciling their inherently conflicting requirements. In addition, the privacy implications of short-time certificates are thoroughly investigated in a short-time certificates-based privacy protection (STCP2) scheme, to make privacy protection in VANETs feasible with short-time certificates. Second, three novel solutions, namely VANET-based ambient ad dissemination (VAAD), general-purpose automatic survey (GPAS), and VehicleView, are proposed to support appealing value-added applications based on VANETs. These solutions all follow practical application models, and an incentive-centered architecture is proposed for each solution to balance the conflicting requirements of the involved entities. Furthermore, the critical security and privacy challenges of these applications are investigated and addressed with novel solutions. With proper security and privacy assurance, these solutions show great application significance and economic potential for VANETs. By enhancing the security, privacy, and applications of VANETs, this dissertation fills the gap between existing theoretical research and the realistic implementation of VANETs, facilitating their real-world deployment.
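The per-message verification cost mentioned above can be made concrete with a small sketch. The snippet below is an illustrative assumption, not the dissertation's RAMV scheme: it verifies an ECDSA signature on a beacon payload using the third-party `cryptography` package. In a dense VANET a node may receive hundreds of such beacons per second, which is why resource-aware prioritization of verifications matters.

```python
# Illustrative sketch (not the RAMV scheme itself): per-message ECDSA
# signature verification, the operation whose cost motivates
# resource-aware verification in dense VANETs.
# Requires the third-party "cryptography" package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# A sender signs a beacon (position/speed payload) with its private key.
sender_key = ec.generate_private_key(ec.SECP256R1())
beacon = b"veh42|lat=41.15|lon=-8.61|speed=13.4|t=1700000000"
signature = sender_key.sign(beacon, ec.ECDSA(hashes.SHA256()))

# Each receiver must verify the signature before trusting the content.
def verify_beacon(public_key, payload: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, payload, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

print(verify_beacon(sender_key.public_key(), beacon, signature))  # True
```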
Abstract:
Among the various aspects to be investigated for a technological and productive upgrade of greenhouse tomato production in the Mediterranean area, the application of supplementary LED interlighting has so far received limited attention. However, high-density tomato cultivation with intensive high-wire systems can lead to mutual shading and a consequent reduction in photosynthesis and yield, even when appreciable amounts of external solar radiation are available, as in Southern Europe. Applications of interest could also involve off-season production or Building-Integrated Agriculture (BIA), such as rooftop greenhouses, where municipal regulations for structure and fire safety can limit the incoming radiation in the growing area. The aim of this research was to investigate diversified applications of supplemental LED interlighting for greenhouse tomato production (Solanum lycopersicum) in Mediterranean countries. The diversified applications included: effects on post-harvest quality, shading reduction in BIA, tailored seedling production, and off-season cultivation. The results showed that the application of supplemental LED light to greenhouse-grown tomato in Mediterranean countries (Italy and Spain) has the potential to foster diverse applications. In particular, it can increase production when solar radiation is limited in rooftop greenhouses, maintain quality and reduce losses during post-harvest, help produce high-quality and tailored seedlings, and increase yield during wintertime. Despite the positive results obtained, some aspects of the application of additional LED light in Southern European countries still need further investigation and improvement. In particular, given the current increase in electricity costs, future research should focus on more economically viable ways of managing supplemental lighting, such as shorter photoperiods or lower intensities, or techniques that can provide energy savings such as pulsed light.
Abstract:
Technical evaluation of analytical data is of extreme relevance, considering it can be used for comparison with environmental quality standards and for decision-making related to the management of dredged sediment disposal and the evaluation of salt and brackish water quality in accordance with CONAMA Resolution 357/05. It is therefore essential that the project manager discusses the environmental agency's technical requirements with the contracted laboratory, both for the follow-up of the ongoing analyses and with a view to possible re-analysis when anomalous data are identified. The main technical requirements are: (1) method quantitation limits (QLs) should fall below environmental standards; (2) analyses should be carried out in laboratories whose analytical scope is accredited by the National Institute of Metrology (INMETRO) or qualified or accepted by a licensing agency; (3) a chain of custody should be provided in order to ensure sample traceability; (4) control charts should be provided to prove method performance; (5) certified reference material analysis or, if that is not available, matrix spike analysis should be undertaken; and (6) chromatograms should be included in the analytical report. Within this context, and with a view to helping environmental managers evaluate analytical reports, this work aims to discuss the limitations of applying SW-846 US EPA methods to marine samples and the consequences of reporting data based on method detection limits (MDL) rather than sample quantitation limits (SQL), and to present possible modifications of the principal method applied by laboratories in order to comply with environmental quality standards.
Abstract:
Context tree models were introduced by Rissanen in [25] as a parsimonious generalization of Markov models. Since then, they have been widely used in applied probability and statistics. The present paper investigates non-asymptotic properties of two popular procedures for context tree estimation: Rissanen's algorithm Context and penalized maximum likelihood. After first showing how the two are related, we prove finite-horizon bounds on the probability of over- and under-estimation. Concerning overestimation, no boundedness or loss-of-memory conditions are required: the proof relies on new deviation inequalities for empirical probabilities that are of independent interest. The under-estimation properties rely on classical hypotheses for processes of infinite memory. These results improve on and generalize the bounds obtained in Duarte et al. (2006) [12], Galves et al. (2008) [18], Galves and Leonardi (2008) [17], and Leonardi (2010) [22], refining the asymptotic results of Bühlmann and Wyner (1999) [4] and Csiszár and Talata (2006) [9].
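As a minimal illustration of the second procedure named above (assumed penalty and pruning rule; not the paper's exact estimator or constants), a penalized maximum-likelihood context tree can be obtained by counting next-symbol frequencies for every candidate context and pruning a node whenever the log-likelihood gain of keeping its children does not exceed the penalty.

```python
# Minimal sketch of penalized maximum-likelihood context tree pruning
# (illustrative penalty; the paper's algorithm Context and constants differ).
from collections import defaultdict
from math import log

def count_contexts(x, max_depth):
    """Count next-symbol frequencies for every context up to max_depth."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(max_depth, len(x)):
        for d in range(max_depth + 1):
            context = tuple(x[i - d:i])          # the d symbols preceding x[i]
            counts[context][x[i]] += 1
    return counts

def log_lik(dist):
    """Maximized log-likelihood of the next-symbol counts in one context."""
    n = sum(dist.values())
    return sum(c * log(c / n) for c in dist.values() if c > 0)

def prune(counts, context, alphabet, max_depth, penalty):
    """Keep the children of `context` only if they improve the penalized score."""
    if len(context) == max_depth:
        return {context}
    children = [(a,) + context for a in alphabet if counts[(a,) + context]]
    gain = sum(log_lik(counts[c]) for c in children) - log_lik(counts[context])
    if gain <= penalty * len(children):          # children not worth their cost
        return {context}
    leaves = set()
    for c in children:
        leaves |= prune(counts, c, alphabet, max_depth, penalty)
    return leaves

x = [0, 1, 1] * 80                               # toy periodic sequence
counts = count_contexts(x, max_depth=3)
print(prune(counts, (), alphabet=[0, 1], max_depth=3, penalty=0.5 * log(len(x))))
# expected leaf contexts: (0,), (0, 1), (1, 1)
```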
Abstract:
Efficient automatic protein classification is of central importance in genomic annotation. As an independent way to check the reliability of the classification, we propose a statistical approach to test whether two sets of protein domain sequences coming from two families of the Pfam database are significantly different. We model protein sequences as realizations of Variable Length Markov Chains (VLMC) and use the context trees as a signature of each protein family. Our approach is based on a Kolmogorov-Smirnov-type goodness-of-fit test proposed by Balding et al. [Limit theorems for sequences of random trees (2008), DOI: 10.1007/s11749-008-0092-z]. The test statistic is a supremum, over the space of trees, of a function of the two samples; its computation grows, in principle, exponentially fast with the maximal number of nodes of the potential trees. We show how to transform this problem into a max-flow problem over a related graph, which can be solved using the Ford-Fulkerson algorithm in time polynomial in that number. We apply the test to 10 randomly chosen protein domain families from the seed of the Pfam-A database (high-quality, manually curated families). The test shows that the distributions of context trees coming from different families are significantly different. We emphasize that this is a novel mathematical approach to validating the automatic clustering of sequences in any context. We also study the performance of the test via simulations on Galton-Watson related processes.
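For illustration, the sketch below is a generic Ford-Fulkerson (Edmonds-Karp, BFS augmenting paths) max-flow routine of the kind the computation above relies on; the paper-specific reduction from the supremum statistic to a flow network is not reproduced here, and the toy graph at the end is an assumption.

```python
# Generic max-flow by the Ford-Fulkerson method with BFS augmenting paths
# (Edmonds-Karp). Only the solver is sketched; the paper's reduction from
# the test statistic to a flow network is not reproduced.
from collections import deque

def max_flow(capacity, source, sink):
    """capacity: dict-of-dicts, capacity[u][v] = remaining capacity (mutated in place)."""
    flow = 0
    while True:
        # Breadth-first search for an augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in capacity.get(u, {}).items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow                          # no augmenting path left
        # Find the bottleneck capacity along the path, then augment.
        bottleneck, v = float("inf"), sink
        while parent[v] is not None:
            bottleneck = min(bottleneck, capacity[parent[v]][v])
            v = parent[v]
        v = sink
        while parent[v] is not None:
            u = parent[v]
            capacity[u][v] -= bottleneck
            back = capacity.setdefault(v, {})
            back[u] = back.get(u, 0) + bottleneck
            v = u
        flow += bottleneck

# Toy network: s -> a -> t and s -> b -> t, each arc with capacity 1.
caps = {"s": {"a": 1, "b": 1}, "a": {"t": 1}, "b": {"t": 1}}
print(max_flow(caps, "s", "t"))                  # 2
```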
Abstract:
In recent years, magnetic nanoparticles have been studied due to their potential applications as magnetic carriers in the biomedical area. These materials have been increasingly exploited as efficient delivery vectors, creating opportunities for their use as magnetic resonance imaging (MRI) agents, mediators of hyperthermia cancer treatment, and in targeted therapies. Much attention has also been focused on "smart" polymers, which are able to respond to environmental changes such as changes in temperature and pH. In this context, this article reviews the state of the art in stimuli-responsive magnetic systems for biomedical applications. The paper describes different types of stimuli-sensitive systems, mainly temperature- and pH-sensitive polymers, the combination of this characteristic with magnetic properties, and, finally, their preparation methods. The article also discusses the main in vivo biomedical applications of such materials. A survey of the recent literature on various stimuli-responsive magnetic gels in biomedical applications is also included.
Abstract:
This paper surveys current work on the design of alarms for anesthesia environments and notes some of the problems arising from the need to interpret alarms in context. Anesthetists' responses to audible alarms in the operating room were observed across four types of surgical procedure (laparoscopic, arthroscopic, cardiac, and intracranial) and across three phases of a procedure (induction, maintenance, and emergence). Alarms were classified as (a) requiring a corrective response, (b) being the intended result of a decision, (c) being ignored as a nuisance alarm, or (d) functioning as a reminder. Results revealed strong effects of the type of procedure and phase of procedure on the number and rate of audible alarms. Some alarms were relatively confined to specific phases; others were seen across phases, and responses differed according to phase. These results were interpreted in light of their significance for the development of effective alarm systems. Actual or potential applications of this research include the design of alarm systems that are more informative and more sensitive to operative context than are current systems.
Abstract:
Some patients are no longer able to communicate effectively or even interact with the outside world in ways that most of us take for granted. In the most severe cases, tetraplegic or post-stroke patients are literally 'locked in' their bodies, unable to exert any motor control after, for example, a spinal cord injury or a brainstem stroke, and require alternative methods of communication and control. But we suggest that, in the near future, their brains may offer them a way out. Non-invasive electroencephalogram (EEG)-based brain-computer interfaces (BCIs) can be characterized by the technique used to measure brain activity and by the way that different brain signals are translated into commands that control an effector (e.g., controlling a computer cursor for word processing and accessing the internet). This review focuses on the basic concepts of EEG-based BCIs, the main advances in communication, motor control restoration and the down-regulation of cortical activity, and the mirror neuron system (MNS) in the context of BCI. The latter appears to be relevant for clinical applications in the coming years, particularly for severely limited patients. Hypothetically, the MNS could provide a robust way to map neural activity to behavior, representing high-level information about the goals and intentions of these patients. Non-invasive EEG-based BCIs allow brain-derived communication in patients with amyotrophic lateral sclerosis and motor control restoration in patients after spinal cord injury and stroke. Epilepsy and attention deficit hyperactivity disorder patients were able to down-regulate their cortical activity. Given the rapid progression of EEG-based BCI research over the last few years and the swift ascent of computer processing speeds and signal analysis techniques, we suggest that emerging ideas (e.g., the MNS in the context of BCI) related to the clinical neuro-rehabilitation of severely limited patients will generate viable clinical applications in the near future.
Abstract:
Following the application of the remember/know paradigm to student learning by Conway et al. (1997), this study examined changes in learning and memory awareness of university students in a lecture course and a research methods course. The proposed shift from a dominance of 'remember' awareness in early learning to a dominance of 'know' awareness as learning progresses and schematization occurs was evident for the methods course but not for the lecture course. The patterns of remember and know awareness and the proposed associated levels of schematization were supported by a separate measure of the quality of student learning using the SOLO (Structure of Observed Learning Outcomes) Taxonomy. As found in previous research, the remember-to-know shift and the schematization of knowledge depend on the type of course and the level of achievement. Findings are discussed in terms of the utility of the methodology used, the theoretical implications, and the applications to educational practice.
Abstract:
This poster, which results from an ongoing PhD thesis project, illustrates how and why the Intellectual Capital (IC) concept can be applied to a seaport. As far as the authors are aware, most IC research has focused on individual firms. Although some recent papers examine macro-level organizations, such as regions, none exist on seaports. There is also a lack of research on the ways IC is created and maintained as a dynamic process. In addition, there is a paucity of management sciences research on maritime transportation and seaports. Several pertinent research questions thus arise, whose answers can have important strategic and managerial implications for the seaport and its stakeholders.
Abstract:
Power system planning, control, and operation require adequate use of existing resources so as to increase system efficiency. The use of optimal solutions in power systems allows huge savings, stressing the need for adequate optimization and control methods. These must be able to solve the envisaged optimization problems on time scales compatible with operational requirements. Power systems are complex, uncertain, and changing environments, which makes the use of traditional optimization methodologies impracticable in most real situations. Computational intelligence methods have good characteristics for addressing this kind of problem and have already proved to be efficient for very diverse power system optimization problems. Evolutionary computation, fuzzy systems, swarm intelligence, artificial immune systems, neural networks, and hybrid approaches are presently seen as the most adequate methodologies for addressing several planning, control, and operation problems in power systems. Future power systems, with intensive use of distributed generation and electricity market liberalization, increase power system complexity and bring huge challenges to the forefront of the power industry. Decentralized intelligence and decision-making require more effective optimization and control techniques so that the involved players can make the most adequate use of existing resources in the new context. This chapter presents the application of computational intelligence methods to several problems of future power systems. Four different applications are presented to illustrate the promise of computational intelligence and its potential.
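As a minimal illustration of one of the methodologies named above, and not one of the chapter's four applications, the sketch below uses particle swarm optimization on an assumed toy economic dispatch problem: allocate output across three generators with quadratic cost curves so that total generation meets demand.

```python
# Illustrative particle swarm optimization sketch for a toy economic dispatch:
# three generators with quadratic cost curves must jointly meet the demand.
# The cost coefficients, bounds, and PSO parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
a = np.array([0.010, 0.020, 0.015])              # quadratic cost coefficients
b = np.array([2.0, 1.5, 1.8])                    # linear cost coefficients
c = np.array([10.0, 15.0, 12.0])                 # fixed costs
p_min, p_max, demand = 10.0, 100.0, 210.0        # MW bounds and total demand

def cost(p):
    """Generation cost plus a penalty for missing the demand balance."""
    return np.sum(a * p**2 + b * p + c) + 1e3 * abs(np.sum(p) - demand)

n_particles, n_iter, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform(p_min, p_max, (n_particles, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, p_min, p_max)
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("dispatch (MW):", np.round(gbest, 1), "total:", round(float(np.sum(gbest)), 1))
```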
Abstract:
Presently, power system operation produces huge volumes of data that are still treated in a very limited way. Knowledge discovery and machine learning can make use of these data, yielding relevant knowledge with a very positive impact. In the context of competitive electricity markets, these data are of even higher value, making the application of data mining techniques in power systems increasingly relevant. This paper presents two cases based on real data, showing the importance of data mining for supporting demand response and player strategic behavior.
Abstract:
Cloud computing is increasingly being adopted in different scenarios, such as social networking, business applications, and scientific experiments. Relying on virtualization technology, the construction of these computing environments targets improvements in the infrastructure, such as power efficiency and fulfillment of users' SLA specifications. The methodology usually applied is to pack all the virtual machines onto the proper physical servers. However, failure occurrences in these networked computing systems can have a substantial negative impact on system performance, deviating the system from its initial objectives. In this work, we propose adapted algorithms to dynamically map virtual machines to physical hosts, in order to improve cloud infrastructure power efficiency with low impact on users' required performance. Our decision-making algorithms leverage proactive fault-tolerance techniques to deal with system failures, allied with virtual machine technology to share node resources in an accurate and controlled manner. The results indicate that our algorithms perform better with respect to power efficiency and SLA fulfillment in the face of cloud infrastructure failures.
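A minimal sketch of the kind of packing heuristic alluded to above (illustrative only, not the paper's fault-tolerance-aware algorithm): virtual machines are sorted by resource demand and placed first-fit onto the fewest hosts, so that the remaining hosts can be powered down.

```python
# Illustrative first-fit-decreasing consolidation sketch (not the paper's
# proactive fault-tolerance-aware algorithm): pack VMs onto as few hosts
# as possible so the remaining hosts can be powered down.
def consolidate(vm_demands, host_capacity):
    """vm_demands: {vm_name: cpu_share}; returns {host_index: [vm_names]}."""
    placement, residual = {}, []                 # residual[i] = free capacity on host i
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: kv[1], reverse=True):
        for i, free in enumerate(residual):
            if free >= demand:                   # first host that still fits the VM
                residual[i] -= demand
                placement[i].append(vm)
                break
        else:                                    # no existing host fits: open a new one
            residual.append(host_capacity - demand)
            placement[len(residual) - 1] = [vm]
    return placement

vms = {"web1": 0.5, "web2": 0.4, "db": 0.7, "cache": 0.2, "batch": 0.6}
print(consolidate(vms, host_capacity=1.0))
# {0: ['db', 'cache'], 1: ['batch', 'web2'], 2: ['web1']}
```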
Abstract:
Aims: This paper aims to address some of the main possible applications of current Nuclear Medicine imaging techniques and methodologies in the specific context of Sports Medicine, namely in two critical systems: musculoskeletal and cardiovascular. Discussion: At the musculoskeletal level, bone scintigraphy has proved to be a functionally oriented diagnostic means of high sensitivity compared with other morphological imaging techniques in the detection and temporal evaluation of pathological situations, for instance allowing the acquisition of highly relevant information in athletes with stress fractures. On the other hand, infection/inflammation studies may add important value in characterizing specific situations and in the early diagnosis of potentially critical issues, thus giving the opportunity for precise, complete, and fast solutions, while allowing the evaluation and eventual optimization of training programs. At the cardiovascular level, Nuclear Medicine has proved to be crucial in the differential diagnosis between cardiac hypertrophy secondary to physical activity (the so-called "athlete's heart") and hypertrophic cardiomyopathy, in the diagnosis and prognosis of changes in cardiac function in athletes, as well as in the direct, non-invasive, in vivo visualization of sympathetic cardiac innervation, something that seems to be gaining more and more importance nowadays, namely in trying to avoid sudden death episodes during intense physical effort. The clinical application of Positron Emission Tomography (PET) is also becoming more and more widely recognized as promising. Conclusions: It is concluded that Nuclear Medicine can become an important tool in Sports Medicine. Its well-established capability for the early detection of processes involving functional properties, allied to its high sensitivity and current technical possibilities (namely hybrid imaging, which adds the information provided by high-resolution morphological imaging techniques such as CT and/or MRI), makes it a powerful diagnostic tool suited to an ever wider range of clinical applications related to all levels of sport activity. Since improvements in equipment characteristics and detection levels allow the use of ever smaller doses, thus minimizing radiation exposure, the authors believe that increased use of Nuclear Medicine tools in the Sports Medicine area should be considered.
Abstract:
Modern real-time systems increasingly generate heavy and dynamic computational workloads, and it is becoming unrealistic to expect them to be implemented on uniprocessor systems. In fact, the shift from single-processor to multiprocessor systems can be seen, both in the general-purpose and in the embedded domain, as an energy-efficient way of improving application performance. At the same time, the proliferation of multiprocessor platforms has turned parallel programming into a topic of great interest, with dynamic parallelism rapidly gaining popularity as a programming model. The idea behind this model is to encourage programmers to expose all opportunities for parallelism by simply indicating potential parallel regions within the application. All these annotations are treated by the system merely as hints and may be ignored and replaced by equivalent sequential constructs by the language itself. Thus, how the computation is actually subdivided and mapped onto the various processors is the responsibility of the compiler and the underlying computing system. By removing this burden from the programmer, programming complexity is considerably reduced, which usually translates into increased productivity. However, unless the underlying scheduling mechanism is simple and fast, so that the overall overhead is kept low, the benefits of generating such fine-grained parallelism are merely hypothetical. From this scheduling perspective, algorithms that employ a work-stealing policy are increasingly popular, with proven efficiency in terms of time, space, and communication requirements. However, these algorithms do not consider timing constraints, nor any other form of task prioritization, which prevents them from being directly applied to real-time systems. Moreover, they are traditionally implemented in the language runtime, creating a two-level scheduling system in which the predictability essential to a real-time system cannot be guaranteed. This thesis describes how the work-stealing approach can be redesigned to meet real-time requirements while keeping the fundamental principles that have produced such good results. In short, the single conventional task queue (deque) is replaced by a queue of deques, ordered by increasing task priority. The well-known dynamic scheduling algorithm G-EDF is then applied on top, the rules of both are merged, and thus our proposal is born: the RTWS scheduling algorithm. Taking advantage of the modularity offered by the Linux scheduler, RTWS is added as a new scheduling class, in order to evaluate in practice whether the proposed algorithm is viable, i.e., whether it guarantees the desired efficiency and schedulability. Modifying the Linux kernel is a difficult task, due to the complexity of its internal functions and the strong interdependencies between the various subsystems. Nevertheless, one of the goals of this thesis was to make sure that RTWS is more than an interesting concept.
Accordingly, a significant part of this document is devoted to discussing the implementation of RTWS and to exposing problematic situations, many of them not considered in theory, such as the mismatch between various synchronization mechanisms. The experimental results show that RTWS, compared with other practical work on the dynamic scheduling of tasks with timing constraints, significantly reduces scheduling overhead through efficient and scalable control of migrations and context switches (at least up to 8 CPUs), while achieving good dynamic load balancing of the system, and does so at low cost. However, during the evaluation a flaw was detected in the RTWS implementation: it gives up stealing work too easily, which causes idle periods on the CPU in question when overall system utilization is low. Although the work focused on keeping the scheduling cost low and on achieving good data locality, system schedulability was never neglected. In fact, the proposed scheduling algorithm proved to be quite robust, missing no deadlines in the experiments performed. We can therefore state that some priority inversion, caused by the BAS stealing sub-policy, does not compromise the schedulability goals and even helps to reduce contention on the data structures. Even so, RTWS also supports a deterministic stealing sub-policy: PAS. The experimental evaluation, however, did not give a clear picture of the impact of one versus the other. Nevertheless, in general, we can conclude that RTWS is a promising solution for the efficient scheduling of parallel tasks with timing constraints.
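A minimal sketch of the queue-of-deques structure described above, assuming a simplified single-process model (the actual RTWS is implemented inside the Linux scheduler, not in Python): each priority level keeps its own deque, the owner pushes and pops at the tail of its highest-priority non-empty deque, and a thief steals from the head of that same deque.

```python
# Minimal sketch of a priority-ordered queue of deques for work stealing,
# assuming a simplified in-process model (the actual RTWS lives inside the
# Linux kernel scheduler). Lower numeric value = higher priority (earlier deadline).
from collections import deque

class PriorityWorkStealingQueue:
    def __init__(self):
        self.deques = {}                          # priority -> deque of tasks

    def push(self, priority, task):
        """Owner enqueues a newly spawned task at the tail of its priority's deque."""
        self.deques.setdefault(priority, deque()).append(task)

    def pop(self):
        """Owner takes work from the tail of the highest-priority non-empty deque."""
        for priority in sorted(self.deques):
            if self.deques[priority]:
                return priority, self.deques[priority].pop()
        return None

    def steal(self):
        """A thief takes work from the head of the highest-priority non-empty deque."""
        for priority in sorted(self.deques):
            if self.deques[priority]:
                return priority, self.deques[priority].popleft()
        return None

q = PriorityWorkStealingQueue()
q.push(20, "job-a"); q.push(10, "job-b"); q.push(10, "job-c")
print(q.pop())    # (10, 'job-c')  owner works LIFO on the most urgent deque
print(q.steal())  # (10, 'job-b')  thief removes the oldest task of the same deque
```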