925 results for Online Systems
Abstract:
This paper presents a technique based on genetic algorithms for generating online adaptive services. Online adaptive systems provide flexible services to a mass of clients/users in order to maximise some system goal, dynamically adapting the form and the content of the issued services as the population of clients evolves over time. The idea of online genetic algorithms (online GAs) is to use the online clients' response behaviour as a fitness function in order to produce the next generation of services. The principle implemented in online GAs, "the application environment is the fitness", allows modelling highly evolutionary domains where both service providers and clients change and evolve over time. The flexibility and adaptive behaviour of this approach seem very relevant and promising for applications characterised by highly dynamic features, such as those in the web domain (online newspapers, e-markets, websites and advertising engines). Nevertheless, the proposed technique has a more general aim: application environments characterised by a massive number of anonymous clients/users who require personalised services, as in the case of many new IT applications.
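The fitness-from-environment loop is easy to picture in code. Below is a minimal, hypothetical sketch (not the paper's implementation): a "service" is abstracted as a parameter vector, and the stubbed observe_clicks function stands in for the live client response that would act as the fitness signal.

```python
import random

# Sketch of an online GA: client responses, not a predefined function,
# act as the fitness. POP_SIZE, GENE_LEN, MUT_RATE and observe_clicks
# are illustrative assumptions.
POP_SIZE, GENE_LEN, MUT_RATE = 20, 8, 0.1

def random_service():
    # A "service" as a parameter vector (e.g. layout and content
    # weights of an online newspaper front page).
    return [random.random() for _ in range(GENE_LEN)]

def crossover(a, b):
    cut = random.randrange(1, GENE_LEN)
    return a[:cut] + b[cut:]

def mutate(genes):
    return [g + random.gauss(0, 0.1) if random.random() < MUT_RATE else g
            for g in genes]

def observe_clicks(service):
    # Placeholder for the real fitness: issue the service to live
    # clients and measure aggregate response (clicks, dwell time).
    return sum(service) + random.gauss(0, 0.5)  # stub

population = [random_service() for _ in range(POP_SIZE)]
for generation in range(100):
    # "The application environment is the fitness": rank services by
    # the client response measured during this serving period.
    scored = sorted(population, key=observe_clicks, reverse=True)
    parents = scored[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children  # next generation of services
```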
Abstract:
In this paper, a learning algorithm for adjusting the weight coefficients of the Cascade Neo-Fuzzy Neural Network (CNFNN) in sequential mode is introduced. The architecture is similar in structure to the Cascade-Correlation Learning Architecture proposed by S.E. Fahlman and C. Lebiere, but differs from it in the type of artificial neurons used. The CNFNN consists of neo-fuzzy neurons, which can be adjusted using high-speed linear learning procedures. The proposed CNFNN is characterized by a high learning rate and a small required learning sample, and its operation can be described by fuzzy linguistic "if-then" rules, providing "transparency" of the obtained results compared with conventional neural networks. The online learning algorithm allows input data to be processed sequentially in real time.
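To make the "high-speed linear learning" concrete, here is a hedged software sketch of a single neo-fuzzy neuron trained sample by sample; because the output is linear in the weights, a simple LMS-style rule suffices. The membership shape, input range and learning rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

class NeoFuzzyNeuron:
    def __init__(self, n_inputs, n_mf=5, lr=0.1):
        self.centers = np.linspace(0.0, 1.0, n_mf)  # per-input MF centers
        self.w = np.zeros((n_inputs, n_mf))         # synaptic weights
        self.lr = lr

    def _memberships(self, x):
        # Triangular memberships, one row per input; width = center spacing.
        width = self.centers[1] - self.centers[0]
        d = np.abs(x[:, None] - self.centers[None, :])
        return np.clip(1.0 - d / width, 0.0, 1.0)

    def predict(self, x):
        return float(np.sum(self._memberships(x) * self.w))  # linear in w

    def update(self, x, target):
        # One sequential (online) step: the gradient of the squared
        # error w.r.t. w is linear, giving a fast LMS-type update.
        mu = self._memberships(x)
        err = target - float(np.sum(mu * self.w))
        self.w += self.lr * err * mu
        return err

neuron = NeoFuzzyNeuron(n_inputs=2)
for x, y in [(np.array([0.2, 0.7]), 0.5), (np.array([0.9, 0.1]), 0.3)]:
    neuron.update(x, y)   # real-time, sample-by-sample adjustment
```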
Abstract:
Editorial: The 2015 BCLA annual conference was another fantastic affair. It was the first time the conference was held in the beautiful city of Liverpool. The venue was great and the programme was excellent. The venue overlooked the River Mersey and many of the hotels were local boutique hotels. I stayed in one which was formerly the offices of the White Star Line, where the RMS Titanic was originally registered, and the hotel decor was consistent with its historic significance. The BCLA gala dinner was held in the hugely impressive Anglican Cathedral with entertainment from a Beatles tribute band. That will certainly be a hard act to follow at the next conference in 2017. Brian Tompkins took the reins as the new BCLA president. Professor Fiona Stapleton was the recipient of the BCLA Gold Medal Award. The winner of the poster competition was Dorota Szczesna-Iskander with a poster entitled 'Dry contact lens poor wettability and visual performance'. Second place was Renee Reeder with her poster entitled 'Abnormal rosacea as a differential diagnosis in corneal scarring'. And third place was Maria Jesus Gonzalez-Garcia with her poster entitled 'Effect of the environmental conditions on tear inflammatory mediators concentration in contact lens wearers'. The photographic competition winner was Professor Wolfgang Sickenberger from Jena in Germany. The Editorial Panel of CLAE met at the BCLA conference for the first of its biannual meetings, where the journal metrics were discussed. In terms of the number of new paper submissions, CLAE seems to have plateaued after several years of rapid growth, an increase that could be attributed to CLAE being awarded an impact factor for the first time in 2012. This year the impact factors of nearly all ophthalmology-related journals have dropped. This could in part be because last year was a Research Excellence Framework (REF) year for UK universities, in which they are judged on the quality of their research output; the next REF is in 2020, so we may see changes nearing that time. Looking at article downloads, there seems to be a continued rise in figures. Currently CLAE attracts around 85,000 downloads per year (an increase of around 10,000 per year over the last few years) and the prediction for 2015 is 120,000! With this in mind, and with other contributing factors too, the BCLA has decided to move to online delivery of CLAE to its members starting from issue 5 of 2015. Some members do like to flick through the pages of a hard copy of the journal, so members will still have the option of receiving a hard copy through the post, but the default journal delivery method will now be online. The BCLA office will send various alerts and content details to members' email addresses. To access CLAE online you will need to log in via the BCLA web page: currently you then click on 'Resources' and under 'Free and Discounted Publications' you will see CLAE. This actually takes you to CLAE's own webpage (www.contactlensjournal.com), but you need to log in via the BCLA web page. The BCLA plans to change these weblinks so that from the BCLA web page you can link to the journal website much more easily, with the choice of going directly to the general CLAE website or straight to the current issue. In 2016 you will see an even easier way of accessing CLAE online, as the BCLA will launch a CLAE application for mobile devices where the journal can be downloaded as a 'flick-book'.
This is a great way of bringing CLAE into the modern era, where people access their information in newer ways. For many, the BCLA conference was part of a very busy conference week, as it was preceded by the International Association of Contact Lens Educators' (IACLE) Third World Congress, held in Manchester over the four days before the BCLA conference. The first and second IACLE World Congresses were held in Waterloo, Canada in 1994 and 2000 respectively, hosted by Professor Des Fonn, who was the recipient of the first ever IACLE lifetime achievement award. The Third IACLE World Congress saw more than 100 contact lens educators and industry representatives from around 30 countries gather in the UK for the four-day event, hosted by The University of Manchester. Delegates gained hands-on experience of innovations in teaching, such as learning delivery systems, the use of iPads in the classroom and for creating ePub content, and augmented and virtual reality technologies. IACLE members around the world also took part via a live online broadcast. The Third IACLE World Congress was made possible by the generous support of sponsors Alcon, CooperVision and Johnson & Johnson Vision Care. For more information, see the IACLE web page (www.iacle.org).
Abstract:
The Everglades Online Thesaurus is a structured vocabulary of concepts and terms relating to the south Florida environment. Designed as an information management tool for both researchers and metadata creators, the Thesaurus is intended to improve information retrieval across the many disparate information systems, databases, and web sites that provide Everglades-related information. The vocabulary provided by the Everglades Online Thesaurus expresses each relevant concept using a single ‘preferred term’, whereas in natural language many terms may exist to express that same concept. In this way, the Thesaurus offers the possibility of standardizing the terminology used to describe Everglades-related information — an important factor in predictable and successful resource discovery.
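At its core, the "preferred term" idea reduces to a variant-to-preferred-term lookup applied before retrieval. A tiny illustrative sketch follows; the terms are invented, not taken from the actual Thesaurus.

```python
# Map the many natural-language variants onto a single preferred term
# before a query is sent to the disparate Everglades information systems.
USE_FOR = {
    "gator": "alligators",
    "american alligator": "alligators",
    "alligator mississippiensis": "alligators",
    "sawgrass marsh": "sawgrass marshes",
}

def normalize_query(term: str) -> str:
    term = term.strip().lower()
    return USE_FOR.get(term, term)   # fall back to the term itself

assert normalize_query("Gator") == "alligators"
```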
Abstract:
This study evaluates the applicability of e-service quality measurements in the context of online hotel bookings. Data were collected through an online survey of undergraduate college students at two universities in the United States. The transaction process-based framework (eTransQual) conceptualized by Bauer et al. (2006) was adapted, and the dimensionality of e-service quality was identified. The study identified process/reliability as the most important factor influencing the overall quality of booking websites.
Abstract:
The promise of Wireless Sensor Networks (WSNs) is the autonomous collaboration of a collection of sensors to accomplish goals that a single sensor cannot. Basically, sensor networking serves a range of applications by providing raw data as the foundation for further analyses and actions. Imprecision in the collected data can seriously mislead the decision-making process of sensor-based applications, resulting in ineffectiveness or failure to meet the application objectives. Because inherent WSN characteristics commonly corrupt the raw sensor readings, many research efforts attempt to improve the accuracy of corrupted or "dirty" sensor data; the dirty data need to be cleaned or corrected. However, existing data-cleaning solutions restrict themselves to the scope of static WSNs, where deployed sensors rarely move during operation. Nowadays, many emerging applications relying on WSNs need sensor mobility to enhance application efficiency and usage flexibility: the locations of deployed sensors are dynamic, and each sensor functions independently while contributing its resources. Sensors mounted on vehicles for monitoring traffic conditions are one prospective example. Sensor mobility causes transients in the network topology and in the correlation among sensor streams. Because they rely on static relationships among sensors, the existing methods for cleaning sensor data in static WSNs are invalid in such mobile scenarios. Therefore, a data-cleaning solution that considers sensor movement is actively needed. This dissertation aims to improve the quality of sensor data by considering the consequences of the various trajectory relationships of autonomous mobile sensors in the system. First, we address the dynamic network topology due to sensor mobility. The concept of a virtual sensor is presented and used for spatio-temporal selection of neighboring sensors to help clean sensor data streams; this is one of the first methods to clean data in mobile sensor environments. We also study the mobility patterns of moving sensors relative to the boundaries of sub-areas of interest, and develop a belief-based analysis to determine reliable sets of neighboring sensors to improve cleaning performance, especially when node density is relatively low. Finally, we design a novel sketch-based technique to clean data from internal sensors where spatio-temporal relationships cannot provide data correlations among sensor streams.
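As a rough illustration of spatio-temporal neighbor selection (not the dissertation's actual algorithm), the sketch below cleans a mobile sensor's reading using neighbors chosen by where and when they sampled rather than by a fixed topology. The radius, time window and weighting scheme are assumptions.

```python
import math

def clean_reading(target, readings, radius=50.0, window=5.0):
    """target/readings: dicts with keys x, y, t, value."""
    weights = []
    for r in readings:
        dist = math.hypot(r["x"] - target["x"], r["y"] - target["y"])
        lag = abs(r["t"] - target["t"])
        if dist <= radius and lag <= window:
            # Closer in space and time means a more trusted neighbor.
            w = (1.0 - dist / radius) * (1.0 - lag / window)
            weights.append((w, r["value"]))
    if not weights:
        return target["value"]          # no usable neighbors: keep raw value
    # Replace the dirty reading with the weighted neighborhood estimate.
    return sum(w * v for w, v in weights) / sum(w for w, _ in weights)
```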
Abstract:
This poster presentation from the May 2015 Florida Library Association Conference, along with the Everglades Explorer discovery portal at http://ee.fiu.edu, demonstrates how traditional bibliographic and curatorial principles can be applied to: 1) selection, cross-walking and aggregation of metadata linking end-users to widespread digital resources from multiple silos; 2) harvesting of select PDFs, HTML and media for web archiving and access; 3) selection of CMS domains, sub-domains and folders for targeted searching using an API. Choosing content for this discovery portal is comparable to the past scholarly practice of creating and publishing subject bibliographies, except that metadata and data are housed in relational databases. This new and yet traditional capacity coincides with: the growth of bibliographic utilities (MarcEdit); the evolution of open-source discovery systems (eXtensible Catalog); the development of target-capable web crawling and archiving systems (Archive-It); and specialized search APIs (Google). At the same time, historical and technical changes, specifically the increasing fluidity and re-purposing of syndicated metadata, make this possible. It equally stems from the expansion of freely accessible digitized legacy and born-digital resources. Innovation principles helped frame the process by which the thematic Everglades discovery portal was created at Florida International University. The path to more effective searching and co-location of digital scientific, educational and historical material related to the Everglades is contextualized through five concepts found within Dyer and Christensen's "The Innovator's DNA: Mastering the Five Skills of Disruptive Innovators" (2011). The project also aligns with Ranganathan's Laws of Library Science, especially the 4th Law: "save the time of the user."
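One plausible shape for the metadata-aggregation step is sketched below. The poster names MarcEdit and the eXtensible Catalog; the use of OAI-PMH here, along with the endpoint URL and set name, is an illustrative assumption rather than the project's documented workflow.

```python
import requests
import xml.etree.ElementTree as ET

# Harvest Dublin Core records over OAI-PMH for later cross-walking.
ENDPOINT = "https://repository.example.edu/oai"   # hypothetical endpoint
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

resp = requests.get(ENDPOINT, params={
    "verb": "ListRecords",
    "metadataPrefix": "oai_dc",
    "set": "everglades",                          # hypothetical set name
}, timeout=30)
root = ET.fromstring(resp.content)

for record in root.iter(OAI + "record"):
    titles = [t.text for t in record.iter(DC + "title")]
    ids = [i.text for i in record.iter(DC + "identifier")]
    print(titles, ids)            # feed into the portal's relational database
```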
Abstract:
Computational intelligence methods have been expanding into industrial applications, motivated by their ability to solve engineering problems, and embedded systems follow the same idea by using computational intelligence tools embedded in machines. There are several works in the area of embedded systems and in the area of intelligent systems; however, few papers have joined both areas. The aim of this study was to implement adaptive fuzzy neural hardware with online training embedded on a Field Programmable Gate Array (FPGA). The system can adapt during the execution of a given application, aiming at online performance improvement. The proposed system architecture is modular, allowing different configurations of fuzzy neural network topologies with online training. The proposed system was applied to mathematical function interpolation, pattern classification and self-compensation of industrial sensors, and achieved satisfactory performance in all three tasks. The experimental results show the advantages and disadvantages of online training in hardware when performed in parallel and sequentially. The sequential training method saves FPGA area but increases the complexity of the architecture's control actions. The parallel training method achieves high performance and reduced processing time, and the pipeline technique is used to increase the architecture's performance further. The development was based on available tools for FPGA circuits.
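The parallel-versus-sequential trade-off can be pictured in software, even though the actual system is synthesized on the FPGA. In the hedged sketch below, both schedules compute numerically identical weight updates; on hardware the difference is area (one multiplier per weight versus one shared multiply-accumulate unit) against cycles and control complexity. Shapes and the learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))          # weights of one fuzzy-neural layer
lr = 0.05

def parallel_step(W, mu, err):
    # Parallel: every weight updated in the same cycle; costly in
    # multipliers but maximizes throughput and pipelines across samples.
    return W + lr * np.outer(err, mu)

def sequential_step(W, mu, err):
    # Sequential: one shared unit reused weight by weight; saves FPGA
    # area at the cost of more cycles and more complex control logic.
    W = W.copy()
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            W[i, j] += lr * err[i] * mu[j]
    return W

mu = rng.random(8)                   # fuzzified inputs for one sample
err = rng.normal(size=4)             # output errors for that sample
assert np.allclose(parallel_step(W, mu, err), sequential_step(W, mu, err))
```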
Abstract:
The maintenance and evolution of software systems has become a highly critical task over recent years due to the diversity of, and high demand for, features, devices and users. Understanding and analyzing how new changes impact the quality attributes of such systems' architectures is an essential prerequisite for avoiding quality deterioration during their evolution. This thesis proposes an automated approach for analyzing variation in the performance quality attribute in terms of execution time (response time). It is implemented by a framework that adopts dynamic analysis and software repository mining techniques to provide an automated way of revealing potential sources (commits and issues) of performance variation in scenarios during software system evolution. The approach defines four phases: (i) preparation, choosing the scenarios and preparing the target releases; (ii) dynamic analysis, determining the performance of scenarios and methods by computing their execution times; (iii) variation analysis, processing and comparing the dynamic analysis results across releases; and (iv) repository mining, identifying issues and commits associated with the detected performance variation. Empirical studies were performed to evaluate the approach from different perspectives. An exploratory study analyzed the feasibility of applying the approach to systems from different domains to automatically identify source code elements with performance variation and the changes that affected those elements during an evolution. This study analyzed three systems: (i) SIGAA, a web system for academic management; (ii) ArgoUML, a UML modeling tool; and (iii) Netty, a framework for network applications. Another study performed an evolutionary analysis by applying the approach to multiple releases of Netty and of the web frameworks Wicket and Jetty. In this study 21 releases (seven of each system) were analyzed, totaling 57 scenarios. In summary, 14 scenarios with significant performance variation were found for Netty, 13 for Wicket and 9 for Jetty. Additionally, feedback was obtained from eight developers of these systems through an online form. Finally, in the last study, a performance regression model was developed to indicate the properties of commits that are most likely to cause performance degradation. Overall, 997 commits were mined, of which 103 were retrieved from degraded source code elements and 19 from optimized ones, while 875 had no impact on execution time. The number of days before the release and the day of the week proved to be the most relevant variables of performance-degrading commits in our model. The area under the Receiver Operating Characteristic (ROC) curve of the regression model is 60%, meaning that using the model to decide whether a commit will cause degradation is 10% better than a random decision.
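Phase (iii) can be pictured with a small, hypothetical sketch (not the thesis's framework): mean execution times of one scenario, measured across repeated runs of two releases, are compared, and changes beyond a threshold are flagged for the repository-mining phase. The 10% threshold and the timings are illustrative.

```python
from statistics import mean

def variation(times_old, times_new, threshold=0.10):
    old, new = mean(times_old), mean(times_new)
    change = (new - old) / old
    if change > threshold:
        return "degraded", change    # candidate for repository mining (iv)
    if change < -threshold:
        return "optimized", change
    return "unchanged", change

# One scenario's response times (ms) over repeated runs of two releases.
print(variation([120, 118, 125], [141, 139, 144]))   # ('degraded', ~0.17)
```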
Abstract:
The presence of high phase noise in addition to additive white Gaussian noise in coherent optical systems affects the performance of forward error correction (FEC) schemes. In this paper, we propose a simple scheme for such systems, using block interleavers and binary Bose–Chaudhuri–Hocquenghem (BCH) codes. The block interleavers are specifically optimized for differential quadrature phase shift keying (DQPSK) modulation. We propose a method for selecting BCH codes that, together with the interleavers, achieve a target post-FEC bit error rate (BER). This combination of interleavers and BCH codes has very low implementation complexity. In addition, our approach is straightforward, requiring only short pre-FEC simulations to parameterize a model, based on which we select codes analytically. We aim to correct a pre-FEC BER of around (Formula presented.). We evaluate the accuracy of our approach using numerical simulations. For a target post-FEC BER of (Formula presented.), codes selected using our method result in BERs around 3(Formula presented.) the target and achieve the target with around 0.2 dB of extra signal-to-noise ratio.
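The row-write/column-read block interleaver underlying such a scheme is simple to sketch: bursts of errors caused by phase noise are spread across several BCH codewords so that each stays within its correction capability. The sketch below is generic; the dimensions are illustrative, not the optimized DQPSK parameters from the paper.

```python
def interleave(bits, rows, cols):
    assert len(bits) == rows * cols
    # Write row by row...
    matrix = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    # ...then read column by column, dispersing adjacent channel errors.
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    return interleave(bits, cols, rows)   # the inverse swaps dimensions

data = list(range(12))
assert deinterleave(interleave(data, 3, 4), 3, 4) == data
```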
Abstract:
Conventional reliability models for parallel systems are not applicable to the analysis of parallel systems with load transfer and sharing. In this short communication, the dependent failures of parallel systems are first analyzed, and a reliability model of the load-sharing parallel system is presented based on Miner's cumulative damage theory and the total probability formula. Second, the parallel system reliability is calculated by Monte Carlo simulation when component life follows the Weibull distribution. The results show that the proposed mathematical reliability model can analyze and evaluate the reliability of parallel systems in the presence of load transfer.
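A hedged sketch of the Monte Carlo step for a two-component load-sharing system follows. When one component fails, the survivor takes the full load, and a Miner-style damage argument spends its remaining life fraction at the shorter full-load characteristic life. The Weibull parameters, mission time and residual-life bookkeeping are illustrative assumptions, not the paper's values.

```python
import random

SHAPE = 2.0
SCALE_SHARED = 1000.0    # characteristic life under shared load (h)
SCALE_FULL = 600.0       # characteristic life under full load (h)

def system_life():
    t1, t2 = (random.weibullvariate(SCALE_SHARED, SHAPE) for _ in range(2))
    first, survivor = min(t1, t2), max(t1, t2)
    # Miner's rule: fraction of life the survivor has consumed by the
    # moment of the first failure, relative to its shared-load life.
    damage = first / survivor
    # The unconsumed fraction is then spent at the full-load rate.
    return first + (1.0 - damage) * random.weibullvariate(SCALE_FULL, SHAPE)

def reliability(mission=800.0, n=100_000):
    return sum(system_life() > mission for _ in range(n)) / n

print(reliability())   # estimated probability the system survives 800 h
```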
Abstract:
Brain-computer interfaces (BCI) have the potential to restore communication or control abilities in individuals with severe neuromuscular limitations, such as those with amyotrophic lateral sclerosis (ALS). The role of a BCI is to extract and decode relevant information that conveys a user's intent directly from brain electro-physiological signals and translate this information into executable commands to control external devices. However, the BCI decision-making process is error-prone due to noisy electro-physiological data, representing the classic problem of efficiently transmitting and receiving information via a noisy communication channel.
This research focuses on P300-based BCIs, which rely predominantly on event-related potentials (ERPs) that are elicited as a function of a user's uncertainty regarding stimulus events, in either an acoustic or a visual oddball recognition task. The P300-based BCI system enables users to communicate messages from a set of choices by selecting a target character or icon that conveys a desired intent or action. P300-based BCIs have been widely researched as a communication alternative, especially in individuals with ALS, who represent a target BCI user population. For the P300-based BCI, repeated data measurements are required to enhance the low signal-to-noise ratio of the elicited ERPs embedded in electroencephalography (EEG) data, in order to improve the accuracy of the target character estimation process. As a result, BCIs are relatively slow compared to other commercial assistive communication devices, which limits BCI adoption by the target user population. The goal of this research is to develop algorithms that take into account the physical limitations of the target BCI population to improve the efficiency of ERP-based spellers for real-world communication.
In this work, it is hypothesised that building adaptive capabilities into the BCI framework can potentially give the BCI system the flexibility to improve performance by adjusting system parameters in response to changing user inputs. The research in this work addresses three potential areas for improvement within the P300 speller framework: information optimisation, target character estimation and error correction. The visual interface and its operation control the method by which the ERPs are elicited through the presentation of stimulus events. The parameters of the stimulus presentation paradigm can be modified to modulate and enhance the elicited ERPs. A new stimulus presentation paradigm is developed in order to maximise the information content that is presented to the user by tuning stimulus paradigm parameters to positively affect performance. Internally, the BCI system determines the amount of data to collect and the method by which these data are processed to estimate the user's target character. Algorithms that exploit language information are developed to enhance the target character estimation process and to correct erroneous BCI selections. In addition, a new model-based method to predict BCI performance is developed, an approach which is independent of stimulus presentation paradigm and accounts for dynamic data collection. The studies presented in this work provide evidence that the proposed methods for incorporating adaptive strategies in the three areas have the potential to significantly improve BCI communication rates, and the proposed method for predicting BCI performance provides a reliable means to pre-assess BCI performance without extensive online testing.
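Two of the ideas above, epoch averaging to raise ERP signal-to-noise and fusing classifier evidence with a language-model prior under dynamic (early-stopping) data collection, can be sketched as follows. The toy templates, prior and confidence threshold are illustrative stand-ins for the trained system, not its actual components.

```python
import numpy as np

CHARS = list("ABC")
PRIOR = np.array([0.5, 0.3, 0.2])   # e.g. from an n-gram language model
TEMPLATES = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])  # toy ERP templates

def classifier_score(avg_epoch):
    # Stand-in for the trained ERP classifier: one likelihood per
    # candidate character, given the averaged epoch.
    return np.exp(-np.linalg.norm(avg_epoch - TEMPLATES, axis=1))

def estimate_target(epochs, confidence=0.90):
    posterior = PRIOR.copy()
    running = np.zeros(2)
    for n, epoch in enumerate(epochs, start=1):
        running += epoch
        avg = running / n                 # averaging improves SNR ~ sqrt(n)
        posterior = PRIOR * classifier_score(avg)
        posterior /= posterior.sum()      # Bayesian fusion with the prior
        if posterior.max() >= confidence: # dynamic stopping: enough evidence
            break
    return CHARS[int(posterior.argmax())], float(posterior.max())

noisy = [np.array([1.0, 0.1]) + np.random.normal(0, 0.3, 2) for _ in range(10)]
print(estimate_target(noisy))             # likely ('A', ...)
```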
Abstract:
With increasing attention towards the role of information systems (IS) as a vehicle to address environmental issues, IS researchers and practitioners have strived to leverage advanced Green IS innovations to persuade people to engage in environmentally responsible practices and support pro-environmental initiatives. Yet existing research reveals that the persuasion effects of Green IS designs remain equivocal. In particular, many design characteristics advocated in Green IS research can produce bi-directional changes in IS users' attitudes and behaviours. To address this issue, this thesis drew upon the circumplex model of social values (S.H. Schwartz, 1992) to explain when and how online persuasion designs come to affect people's judgements on resource conservation and environmental protection. Three sets of working propositions and specific hypotheses were developed. Specifically, this research suggests that the use of an IS application can elicit different value primes and draw IS users' attention to different motivational functions of engaging in suggested behavioural changes. It is expected that matching online persuasion appeals with IS users' personal value priorities can increase users' acceptance of online behavioural suggestions. Second, it is hypothesized that the persuasion effect tends to weaken as system users become aware of the value-matching design in a given IS application. Third, it is proposed that different value primes presented in an IS application can result in different unintended effects on IS users' global pro-environmental attitudes and motivations. The hypotheses were tested in two pilot studies and two full-scale online experiments, and the findings generally support the main predictions. On the one hand, this thesis provides empirical evidence that IS design for online persuasion can be instrumental in influencing IS users' judgements on a range of resource conservation practices. On the other hand, this work explains why the effectiveness of IS-enabled online persuasion attempts needs to be measured not only in terms of the intended changes in a target behavioural domain but also in terms of unintended changes in people's general environmental orientations. The findings of this research may bring a different perspective on understanding and assessing the influence of Green IS applications on IS users' judgements and behaviour.