940 results for "real-time capability"
Abstract:
We report on advanced dual-wavelength digital holographic microscopy (DHM) methods, enabling single-acquisition real-time micron-range measurements while maintaining single-wavelength interferometric resolution in the nanometer regime. On top of the unique real-time capability of our technique, it is shown that axial resolution can be further increased compared to single-wavelength operation thanks to the uncorrelated nature of both recorded wavefronts. It is experimentally demonstrated that DHM topographic investigation over a measurement range spanning three decades can be achieved with our arrangement, opening new application possibilities for this interferometric technique. © 2008 SPIE
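For reference, two-wavelength interferometry is usually described in terms of the synthetic (beat) wavelength, which sets the extended unambiguous range; the abstract does not state the relation, so this is the standard one:

    \Lambda = \frac{\lambda_1 \lambda_2}{\left| \lambda_1 - \lambda_2 \right|}

Phase evaluated at the synthetic wavelength Λ resolves height over a much larger range than either optical wavelength, while the single-wavelength phase retains nanometer-scale sensitivity.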
Abstract:
Recently a new recipe for developing and deploying real-time systems has become increasingly adopted in the JET tokamak. Powered by the advent of x86 multi-core technology and the reliability of JET's well-established Real-Time Data Network (RTDN) to handle all real-time I/O, an official Linux vanilla kernel has been demonstrated to be able to provide real-time performance to user-space applications that are required to meet stringent timing constraints. In particular, a careful rearrangement of the Interrupt ReQuests' (IRQs) affinities, together with the kernel's CPU isolation mechanism, makes it possible to obtain either soft or hard real-time behavior depending on the synchronization mechanism adopted. Finally, the Multithreaded Application Real-Time executor (MARTe) framework is used for building applications particularly optimised for exploiting multi-core architectures. In the past year, four new systems based on this philosophy have been installed and are now part of JET's routine operation. The focus of the present work is on the configuration and interconnection of the ingredients that enable these new systems' real-time capability, and on the impact that JET's distributed real-time architecture has on system engineering requirements such as algorithm testing and plant commissioning. Details are given about the common real-time configuration and development path of these systems, followed by a brief description of each system together with results regarding their real-time performance. A cycle-time jitter analysis of a user-space MARTe-based application synchronising over a network is also presented. The goal is to compare its deterministic performance while running on a vanilla and on a Messaging Real-time Grid (MRG) Linux kernel.
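As a concrete illustration of the IRQ-affinity and CPU-isolation recipe described above, the following is a minimal Python sketch, assuming a Linux host booted with isolcpus=3, root privileges, and a hypothetical IRQ number 42 standing in for the RTDN network adapter (JET's production systems implement this within the MARTe framework, not in Python):

    import os

    RT_CPU = 3    # core removed from the scheduler via the isolcpus= boot parameter
    NIC_IRQ = 42  # hypothetical IRQ number of the real-time network adapter

    # Steer the adapter's interrupts onto the isolated core (hexadecimal CPU bitmask).
    with open(f"/proc/irq/{NIC_IRQ}/smp_affinity", "w") as f:
        f.write(f"{1 << RT_CPU:x}")

    # Pin this process to the same core and give it a FIFO real-time priority, so the
    # thread woken by the interrupt is never preempted by ordinary time-shared tasks.
    os.sched_setaffinity(0, {RT_CPU})
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))

Keeping both the device interrupt and the consuming thread on one isolated core is what allows the soft or hard real-time behavior mentioned above, depending on whether the application polls or blocks on the synchronization primitive.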
Abstract:
Location-based services have given new impetus to the creativity of mobile application developers. The widespread availability of devices with built-in positioning capabilities has led to the development of applications that manage and present information based on the user's position. Since then, the mobile market has witnessed the emergence of new categories of applications that take advantage of this capability. Among them, remote device monitoring stands out, having assumed growing importance in both the consumer and business sectors. This dissertation begins by presenting the state of the art of the different positioning systems, categorized by their effectiveness in indoor or outdoor environments, as well as different near-real-time communication protocols. An analysis of the current state of the mobile market is also provided. The market currently comprises several mobile platforms with unique characteristics that lead them to compete with one another to expand their market share, so a brief study of today's most relevant mobile operating systems is presented. A more in-depth examination is also made of the architecture of Apple's mobile platform, iOS, which served as the basis for the development of a solution optimized for locating and monitoring mobile devices. Monitoring implies an intensive use of energy and bandwidth resources that today's mobile devices are not able to sustain. Given the high energy consumption of GPS relative to the limited battery life of these devices, a study is presented proposing solutions for managing GPS usage in an optimized way. The high cost of the data plans offered by mobile operators is also considered, so solutions aimed at minimizing bandwidth usage are explored. From this work emerges the EyeGotcha application, which, besides locating other mobile device users in an optimized way, also makes it possible to monitor their actions based on a set of predefined rules. These actions are reported to the monitoring entities automatically, in the form of alerts. With a view to commercializing the application, a business model is presented that can generate revenue capable of covering the maintenance costs of the services on which the mobile application depends.
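The rule format is not specified in the abstract; as a minimal sketch of the kind of rule-based monitoring described, the hypothetical geofence check below raises an alert when a monitored device leaves a permitted zone (all names, coordinates and thresholds are illustrative):

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two WGS-84 points."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def check_geofence(position, rule):
        """Return an alert dict if the position violates the geofence rule, else None."""
        d = haversine_m(position["lat"], position["lon"], rule["lat"], rule["lon"])
        if d > rule["radius_m"]:
            return {"device": position["device"], "rule": rule["name"], "distance_m": round(d)}
        return None

    alert = check_geofence(
        {"device": "phone-1", "lat": 38.74, "lon": -9.15},
        {"name": "school-zone", "lat": 38.73, "lon": -9.14, "radius_m": 500},
    )
    if alert:
        print("ALERT:", alert)  # in the described system, pushed to the monitoring entity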
Abstract:
Dissertation presented to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in fulfilment of the requirements for the degree of Master in Electrical and Computer Engineering
Abstract:
The purpose of this study was to evaluate the determinism of the AS-Interface network and the 3 main families of control systems which may use it, namely PLC, PC and RTOS. During the course of this study the PROFIBUS and Ethernet field-level networks were also considered, in order to ensure that they would not introduce unacceptable latencies into the overall control system. This research demonstrated that an incorrectly configured Ethernet network introduces unacceptable variable-duration latencies into the control system, thus care must be exercised if the determinism of a control system is not to be compromised. This study introduces a new concept of using statistics and process capability metrics, in the form of Cpk values, to specify how suitable a control system is for a given control task. The PLC systems that were tested demonstrated extremely deterministic responses, but when a large number of iterations were introduced in the user program, the mean control-system latency was much too great for an AS-I network. Thus the PLC was found to be unsuitable for an AS-I network if a large, complex user program is required. The PC systems that were tested were non-deterministic and had latencies of variable duration. These latencies became extremely exaggerated when a graphing ActiveX control was included in the control application. These PC systems also exhibited a non-normal frequency distribution of control-system latencies, and as such are unsuitable for implementation with an AS-I network. The RTOS system that was tested overcame the problems identified with the PLC systems and produced an extremely deterministic response, even when a large number of iterations were introduced in the user program. The RTOS system that was tested is capable of providing a suitably deterministic control-system response, even when an extremely large, complex user program is required.
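For reference, the standard definition of the process-capability index that the study applies to latency data (the abstract does not reproduce it) is

    C_{pk} = \min\!\left( \frac{\mathrm{USL} - \mu}{3\sigma},\ \frac{\mu - \mathrm{LSL}}{3\sigma} \right),

where \mu and \sigma are the mean and standard deviation of the measured latency and USL/LSL are the upper and lower specification limits; for a one-sided latency budget, only the upper term applies. Note that this interpretation assumes a near-normal latency distribution, which is exactly why the non-normal PC latencies reported above are problematic.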
Abstract:
In this report, a face recognition system that is capable of detecting and recognizing frontal and rotated faces was developed. Two face recognition methods focusing on the aspect of pose invariance are presented and evaluated - the whole-face approach and the component-based approach. The main challenge of this project is to develop a system that is able to identify faces under different viewing angles in real time. The development of such a system will enhance the capability and robustness of current face recognition technology. The whole-face approach recognizes faces by classifying a single feature vector consisting of the gray values of the whole face image. The component-based approach first locates the facial components and extracts them. These components are normalized and combined into a single feature vector for classification. The Support Vector Machine (SVM) is used as the classifier for both approaches. Extensive tests with respect to the robustness against pose changes are performed on a database that includes faces rotated up to about 40 degrees in depth. The component-based approach clearly outperforms the whole-face approach on all tests. Although this approach is proven to be more reliable, it is still too slow for real-time applications. That is the reason why a real-time face recognition system using the whole-face approach is implemented to recognize people in color video sequences.
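A minimal sketch of the whole-face classification step, assuming scikit-learn and synthetic placeholder data (the report's own SVM implementation, image size and training set differ):

    import numpy as np
    from sklearn.svm import SVC

    def to_feature_vector(face_img):
        """Flatten a cropped grayscale face into one normalized gray-value vector."""
        v = face_img.astype(np.float32).ravel()
        return (v - v.mean()) / (v.std() + 1e-8)  # crude illumination normalization

    # Placeholder data: 4 identities x 5 images of 40x40 pixels each.
    rng = np.random.default_rng(0)
    train_faces = rng.integers(0, 256, size=(20, 40, 40), dtype=np.uint8)
    train_labels = np.repeat(np.arange(4), 5)

    clf = SVC(kernel="linear")
    clf.fit(np.stack([to_feature_vector(f) for f in train_faces]), train_labels)

    query = rng.integers(0, 256, size=(40, 40), dtype=np.uint8)
    print("predicted identity:", clf.predict([to_feature_vector(query)])[0])

The component-based approach differs only in how the feature vector is built: the located and normalized component patches are concatenated into one vector before the same SVM classification.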
Abstract:
Real-time geoparsing of social media streams (e.g. Twitter, YouTube, Instagram, Flickr, FourSquare) is providing a new 'virtual sensor' capability to end users such as emergency response agencies (e.g. tsunami early warning centres, civil protection authorities) and news agencies (e.g. Deutsche Welle, BBC News). Challenges in this area include scaling up natural language processing (NLP) and information retrieval (IR) approaches to handle real-time traffic volumes, reducing false positives, creating real-time infographic displays useful for effective decision support, and providing support for trust and credibility analysis using geosemantics. In this seminar I will present on-going work by the IT Innovation Centre over the last 4 years (TRIDEC and REVEAL FP7 projects) in building such systems, and highlight our research towards improving the trustworthiness and credibility of crisis-map displays and real-time analytics for trending topics and influential social networks during major newsworthy events.
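The seminar's actual NLP/IR pipeline is not described in the abstract; as a toy illustration of the basic geoparsing step only, the following resolves place mentions in a post against a (hypothetical, two-entry) gazetteer:

    GAZETTEER = {  # hypothetical place -> (lat, lon) entries
        "lisbon": (38.722, -9.139),
        "hamburg": (53.551, 9.994),
    }

    def geoparse(post_text):
        """Return (place, coords) pairs for gazetteer entries mentioned in a post."""
        tokens = post_text.lower().replace(",", " ").split()
        return [(t, GAZETTEER[t]) for t in tokens if t in GAZETTEER]

    print(geoparse("Flooding reported near Lisbon this morning"))

Real systems replace the string match with named-entity recognition and toponym disambiguation, which is where the false-positive reduction mentioned above is won or lost.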
Abstract:
We present the first climate prediction of the coming decade made with multiple models, initialized with prior observations. This prediction accrues from an international activity to exchange decadal predictions in near real-time, in order to assess differences and similarities, provide a consensus view to prevent over-confidence in forecasts from any single model, and establish current collective capability. We stress that the forecast is experimental, since the skill of the multi-model system is as yet unknown. Nevertheless, the forecast systems used here are based on models that have undergone rigorous evaluation and individually have been evaluated for forecast skill. Moreover, it is important to publish forecasts to enable open evaluation, and to provide a focus on climate change in the coming decade. Initialized forecasts of the year 2011 agree well with observations, with a pattern correlation of 0.62 compared to 0.31 for uninitialized projections. In particular, the forecast correctly predicted La Niña in the Pacific, and warm conditions in the north Atlantic and USA. A similar pattern is predicted for 2012 but with a weaker La Niña. Indices of Atlantic multi-decadal variability and Pacific decadal variability show no signal beyond climatology after 2015, while temperature in the Niño3 region is predicted to warm slightly by about 0.5 °C over the coming decade. However, uncertainties are large for individual years and initialization has little impact beyond the first 4 years in most regions. Relative to uninitialized forecasts, initialized forecasts are significantly warmer in the north Atlantic sub-polar gyre and cooler in the north Pacific throughout the decade. They are also significantly cooler in the global average and over most land and ocean regions out to several years ahead. However, in the absence of volcanic eruptions, global temperature is predicted to continue to rise, with each year from 2013 onwards having a 50 % chance of exceeding the current observed record. Verification of these forecasts will provide an important opportunity to test the performance of models and our understanding and knowledge of the drivers of climate change.
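The pattern correlations quoted (0.62 initialized versus 0.31 uninitialized) are centered, area-weighted correlations between forecast and observed anomaly fields; below is a minimal sketch assuming simple cos-latitude weighting on a regular grid (the paper's exact masking and weighting may differ):

    import numpy as np

    def pattern_correlation(forecast, observed, lats):
        """Centered, cos(latitude)-weighted correlation of two lat-lon anomaly fields."""
        w = np.cos(np.radians(lats))[:, None] * np.ones_like(forecast)
        f = forecast - np.average(forecast, weights=w)
        o = observed - np.average(observed, weights=w)
        cov = np.average(f * o, weights=w)
        return cov / np.sqrt(np.average(f * f, weights=w) * np.average(o * o, weights=w))

    # Demo on synthetic fields (36 x 72 global grid).
    lats = np.linspace(-87.5, 87.5, 36)
    rng = np.random.default_rng(1)
    obs = rng.standard_normal((36, 72))
    fcst = obs + 0.8 * rng.standard_normal((36, 72))
    print(round(pattern_correlation(fcst, obs, lats), 2))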
Abstract:
This research presents a novel multi-functional system for medical Imaging-enabled Assistive Diagnosis (IAD). Although the IAD demonstrator has focused on abdominal images and supports the clinical diagnosis of kidneys using CT/MRI imaging, it can be adapted to work on image delineation, annotation and 3D real-size volumetric modelling of other organ structures such as the brain, spine, etc. The IAD provides advanced real-time 3D visualisation and measurements with fully automated functionalities, developed in two stages. In the first stage, via the clinically driven user interface, specialist clinicians use CT/MRI imaging datasets to accurately delineate and annotate the kidneys and their possible abnormalities, thus creating “3D Golden Standard Models”. Based on these models, in the second stage, clinical support staff, i.e. medical technicians, interactively define model-based rules and parameters for the integrated “Automatic Recognition Framework” to achieve results which are closest to those of the clinicians. These specific rules and parameters are stored in “Templates” and can later be used by any clinician to automatically identify organ structures, i.e. kidneys, and their possible abnormalities. The system also supports the transmission of these “Templates” to another expert for a second opinion. A 3D model of the body, the organs and their possible pathology with real metrics is also integrated. The automatic functionality was tested on eleven MRI datasets (comprising 286 images) and the 3D models were validated by comparing them with the metrics from the corresponding “3D Golden Standard Models”. The system provides metrics for the evaluation of its results in terms of Accuracy, Precision, Sensitivity, Specificity and Dice Similarity Coefficient (DSC), so as to enable benchmarking of its performance. The first IAD prototype has produced promising results: its accuracy based on the most widely deployed evaluation metric, DSC, is 97% for the recognition of kidneys and 96% for their abnormalities, whilst across all the above evaluation metrics its performance ranges between 96% and 100%. Further development of the IAD system is in progress to extend and evaluate its clinical diagnostic support capability through the development and integration of additional algorithms to offer fully computer-aided identification of other organs and their abnormalities based on CT/MRI/Ultrasound imaging.
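For reference, the Dice Similarity Coefficient used as the primary benchmark is the standard overlap measure between an automatic segmentation A and its gold-standard counterpart B:

    \mathrm{DSC}(A, B) = \frac{2\,\lvert A \cap B \rvert}{\lvert A \rvert + \lvert B \rvert}

A DSC of 1 indicates perfect overlap; the 97% figure above corresponds to DSC = 0.97.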
Abstract:
Following the thermodynamic formulation of a multifractal measure that was shown to enable the detection of large fluctuations at an early stage, here we propose a new index which permits us to distinguish events like financial crises in real time. We calculate the partition function, from which we can obtain thermodynamic quantities analogous to the free energy and specific heat. The index is defined as the normalized energy variation, and it can be used to study the behavior of stochastic time series, such as daily financial market data. Famous financial market crashes (Black Thursday, 1929; Black Monday, 1987; and the subprime crisis, 2008) are identified with clear and robust results. The method is also applied to the market fluctuations of 2011. From these results, the apparent crisis of 2011 appears to be of a different nature from the other three. We also show that the analysis has forecasting capabilities. © 2012 Elsevier B.V. All rights reserved.
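The formalism itself is not reproduced in the abstract; in the standard thermodynamic formulation of a multifractal measure, the partition function over boxes of size s is

    Z(q, s) = \sum_i \mu_i(s)^{\,q} \sim s^{\tau(q)},

where the mass exponent \tau(q) plays the role of a free energy whose derivatives yield specific-heat-like quantities; the proposed index normalizes the variation of the analogous energy (the paper's exact normalization is not given here).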
Abstract:
Electric power grids throughout the world suffer from serious inefficiencies associated with under-utilization, due to demand patterns, engineering design and the load-following approaches in use today. These grids consume much of the world's energy and represent a large carbon footprint. From a material-utilization perspective, significant hardware is manufactured and installed for this infrastructure, often to be used at less than 20-40% of its operational capacity for most of its lifetime. These inefficiencies lead engineers to require additional grid support and conventional generation capacity additions when renewable technologies (such as solar and wind) and electric vehicles are to be added to the utility demand/supply mix. Using actual data from PJM [PJM 2009], this work shows that consumer load management, real-time price signals, sensors and intelligent demand/supply control offer a compelling path forward to increase the efficient utilization of the world's grids and reduce their carbon footprint. Under-utilization factors from many distribution companies indicate that distribution feeders are often operated at only 70-80% of their peak capacity for a few hours per year, and on average are loaded to less than 30-40% of their capability. By creating strong societal connections between consumers and energy providers, technology can radically change this situation. Through intelligent deployment of smart sensors, smart electric vehicles and consumer-based load management technology, very high saturations of intermittent renewable energy supplies can be effectively controlled and dispatched to increase the levels of utilization of existing utility distribution, substation, transmission and generation equipment. The strengthening of these technology, society and consumer relationships requires rapid dissemination of knowledge (real-time prices, costs and benefit sharing, demand response requirements) in order to incentivize behaviors that can increase the effective use of the technological equipment that represents one of the largest capital assets modern society has created.
Abstract:
To address food safety concerns of the public regarding the potential transfer of recombinant DNA (cry1Ab) and protein (Cry1Ab) into the milk of cows fed genetically modified maize (MON810), a highly specific and sensitive quantitative real-time PCR (qPCR) and an ELISA were developed for monitoring the suspected presence of novel DNA and Cry1Ab protein in bovine milk. The developed assays were validated according to the assay validation criteria specified in European Commission Decision 2002/657/EC. The detection limit and detection capability of the qPCR and ELISA were 100 copies of cry1Ab μL⁻¹ of milk and 0.4 ng mL⁻¹ of Cry1Ab, respectively. Recovery rates of 84.9% (DNA) and 97% (protein) and low (<15%) imprecision demonstrated reliable and accurate estimation. A specific qPCR amplification and the use of a specific antibody in the ELISA ascertained the high specificity of the assays. Using these assays on 90 milk samples collected from cows fed either transgenic (n = 8) or non-transgenic (n = 7) rations for 6 months, neither cry1Ab nor Cry1Ab protein was detected in any analyzed sample at the assay detection limits.
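To make the validation metrics concrete, the recovery rate is the mean measured concentration relative to the spiked amount, and imprecision is the coefficient of variation of replicates; a small sketch with made-up replicate values (the study's raw data are not given here):

    import statistics

    spiked = 10.0                      # ng/mL Cry1Ab added to blank milk (hypothetical)
    measured = [9.6, 9.9, 9.2, 10.1]   # hypothetical replicate ELISA readings

    recovery = 100 * statistics.mean(measured) / spiked                 # % recovery
    cv = 100 * statistics.stdev(measured) / statistics.mean(measured)   # imprecision (CV %)
    print(f"recovery {recovery:.1f}%, CV {cv:.1f}%")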
Abstract:
Purpose: Selective retina therapy (SRT) has shown great promise compared to conventional retinal laser photocoagulation, as it avoids collateral damage and selectively targets the retinal pigment epithelium (RPE). Its use, however, is challenging in terms of therapy monitoring and dosage, because an immediate tissue reaction is not biomicroscopically discernible. To overcome these limitations, real-time optical coherence tomography (OCT) might be useful to monitor retinal tissue during laser application. We have thus evaluated a proprietary OCT system for its capability of mapping optical changes introduced by SRT in retinal tissue. Methods: Freshly enucleated porcine eyes, covered in DMEM upon collection, were utilized, and a total of 175 scans from ex-vivo porcine eyes were analyzed. The porcine eyes were used as an ex-vivo model and the results compared to two time-resolved OCT scans recorded from a patient undergoing SRT treatment (SRT Vario, Medical Laser Center Lübeck). In addition to OCT, fluorescein angiography and fundus photography were performed on the patient, and the OCT scans were subsequently investigated for optical tissue changes linked to laser application. Results: Biomicroscopically invisible SRT lesions were detectable in OCT through changes in the RPE/Bruch's membrane complex, both in vivo and in the porcine ex-vivo model. Laser application produced clearly visible optical effects such as hyper-reflectivity and tissue distortion in the treated retina. Tissue effects were discernible in time-resolved OCT imaging even when no hyper-reflectivity persisted after treatment. Data from ex-vivo porcine eyes showed similar or identical optical changes, while the effects visible in OCT appeared to correlate with the applied pulse energy, leading to an additional reflective layer when lesions became visible in indirect ophthalmoscopy. Conclusions: Our results support the hypothesis that real-time high-resolution OCT may be a promising modality for obtaining additional information about the extent of tissue damage caused by SRT treatment. The data show that our ex-vivo porcine model adequately reproduces the effects occurring in vivo, and can thus be used to further investigate this promising imaging technique.
Abstract:
The concept of measurement-enabled production is based on integrating metrology systems into production processes; it has generated significant interest in industry due to its potential to increase process capability and accuracy, which in turn reduces production times and eliminates defective parts. One of the most promising methods of integrating metrology into production is the use of external metrology systems to compensate machine tool errors in real time. This paper describes the development and experimental performance evaluation of a low-cost, laser-tracker-assisted prototype three-axis machine tool. Real-time correction of the machine tool's absolute volumetric error has been achieved. As a result, significant increases in static repeatability and accuracy have been demonstrated, allowing the low-cost three-axis machine tool to reliably reach static positioning accuracies below 35 μm throughout its working volume without any prior calibration or error mapping. This is a significant technical development that demonstrates the feasibility of the proposed methods and can have wide-scale industrial applications, enabling low-cost machine tools of modest structural integrity, deployed flexibly as end-effectors of robotic automation, to achieve positional accuracies that were previously the preserve of large, high-precision machine tools.
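A minimal sketch of the compensation principle, with hypothetical numbers; the paper's actual controller integration, coordinate transformation and tracker interface are considerably more involved:

    import numpy as np

    def compensated_target(nominal_xyz, measured_xyz):
        """Correct the commanded position by the externally observed volumetric error."""
        nominal = np.asarray(nominal_xyz)
        error = np.asarray(measured_xyz) - nominal   # tracker reading minus commanded position
        return nominal - error                       # offset the command to cancel the error

    nominal = [100.0, 50.0, 25.0]        # mm, commanded position (hypothetical)
    tracker = [100.02, 49.985, 25.01]    # mm, laser tracker measurement (hypothetical)
    print(compensated_target(nominal, tracker))  # corrected set-point for the next move

Because the correction comes from an external, absolute measurement, the machine needs no prior calibration or error map, which is what the results above exploit.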
Abstract:
Surface plasmon resonance (SPR) and localized surface plasmon resonance (LSPR) biosensors have brought a revolutionary change to the in vitro study of biological and biochemical processes due to their ability to measure extremely small changes in surface refractive index (RI), binding equilibrium and kinetics. Strategies based on LSPR have been employed to enhance sensitivity for a variety of applications, such as diagnosis of diseases, environmental analysis, food safety and chemical threat detection. In LSPR spectroscopy, absorption and scattering of light are greatly enhanced at frequencies that excite the LSPR, resulting in a characteristic extinction spectrum that depends on the RI of the surrounding medium. Compositional and conformational changes within the surrounding medium near the sensing surface can therefore be detected as shifts in the extinction spectrum. This dissertation focuses specifically on the development and evaluation of highly sensitive LSPR biosensors for the in situ study of biomolecular binding processes by incorporating nanotechnology. Compared to traditional methods for biomolecular binding studies, LSPR-based biosensors offer real-time, label-free detection. First, we modified the gold sensing surface of LSPR-based biosensors using nanomaterials such as gold nanoparticles (AuNPs) and polymer to enhance surface absorption and sensitivity. The performance of this type of biosensor was evaluated in a binding-affinity study of small heavy-metal species. This biosensor exhibited a ∼7-fold sensitivity enhancement and binding-kinetics measurement capability compared to traditional biosensors. Second, a miniaturized cell culture system was integrated into the LSPR-based biosensor system for the purpose of real-time biomarker signaling pathway studies and drug efficacy studies with living cells. To the best of our knowledge, this is the first LSPR-based sensing platform with the capability of living-cell studies. We demonstrated the living-cell measurement capability by studying the VEGF signaling pathway in living SKOV-3 cells. The results show that the VEGF secretion level from SKOV-3 cells is 0.0137 ± 0.0012 pg per cell. Moreover, we have demonstrated bevacizumab drug regulation of the VEGF signaling pathway using this biosensor. This sensing platform could potentially help in studying the biomolecular binding kinetics that elucidate the underlying mechanisms of biotransport and drug delivery.
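The binding kinetics such sensors resolve are commonly modelled with the 1:1 Langmuir interaction (not stated in the abstract), in which the fractional surface coverage \theta obeys

    \frac{d\theta}{dt} = k_a C (1 - \theta) - k_d \theta, \qquad K_D = \frac{k_d}{k_a},

where C is the analyte concentration; fitting the time-resolved LSPR shift to this model yields the association and dissociation rate constants k_a and k_d, and hence the equilibrium dissociation constant K_D.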