970 results for Transparent
Abstract:
This report describes a system which maintains canonical expressions for designators under a set of equalities. Substitution is used to maintain all knowledge in terms of these canonical expressions. A partial order on designators, termed the better-name relation, is used in the choice of canonical expressions. It is shown that with an appropriate better-name relation an important engineering reasoning technique, propagation of constraints, can be implemented as a special case of this substitution process. Special purpose algebraic simplification procedures are embedded such that they interact effectively with the equality system. An electrical circuit analysis system is developed which relies upon constraint propagation and algebraic simplification as primary reasoning techniques. The reasoning is guided by a better-name relation in which referentially transparent terms are preferred to referentially opaque ones. Multiple descriptions of subcircuits are shown to interact strongly with the reasoning mechanism.
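As a rough gloss on the canonical-naming idea in this abstract, the sketch below keeps designators in union-find-style equivalence classes and picks canonical names with a better-name relation; the class, the length-based ordering, and the circuit-flavoured names are assumptions for illustration, not the report's actual implementation.

```python
# A minimal sketch, assuming a union-find store of equalities and a
# length-based better-name ordering; both are illustrative only.
class CanonicalNames:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        # Follow parent links to the canonical designator for x.
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def better(self, a, b):
        # Assumed better-name relation: prefer shorter designators.
        return (len(a), a) < (len(b), b)

    def assert_equal(self, a, b):
        # Record a = b; the better name becomes canonical, so knowledge
        # phrased in terms of the loser is re-expressed (via find) in
        # terms of the winner.
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            winner, loser = (ra, rb) if self.better(ra, rb) else (rb, ra)
            self.parent[loser] = winner

names = CanonicalNames()
names.assert_equal("v_out", "i*R")       # an equality from circuit analysis
names.assert_equal("i*R", "node7.volt")
print(names.find("node7.volt"))          # -> "i*R"
```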
Abstract:
Timmis, J. and Neal, M. J. An artificial immune system for data analysis. In Proceedings of the 3rd International Workshop on Information Processing in Cells and Tissues (IPCAT), Indianapolis, USA, 1999.
Abstract:
K. Rasmani and Q. Shen. Subsethood-based fuzzy modelling and classification. In Proceedings of the 2004 UK Workshop on Computational Intelligence, pages 181-188.
Abstract:
University of Pretoria / Dissertation / Department of Church History and Church Policy / Advised by Prof J W Hofmeyr
Abstract:
This paper presents the design and implementation of an infrastructure that enables any Web application, regardless of its current state, to be stopped and uninstalled from a particular server, transferred to a new server, then installed, loaded, and resumed, with all these events occurring "on the fly", totally transparently to clients. Such functionality allows entire applications to move fluidly from server to server, reducing the overhead required to administer the system and increasing its performance in a number of ways: (1) dynamic replication of new instances of applications to several servers to raise throughput for scalability purposes, (2) moving applications between servers to achieve load balancing or other resource management goals, and (3) caching entire applications on servers located closer to clients.
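A hedged sketch of the stop/transfer/resume flow described above; the App and Server classes and the pickle-based state snapshot are toy stand-ins invented for this example, not the paper's actual infrastructure.

```python
# Toy illustration of migrating a running application between servers.
import pickle

class App:
    def __init__(self, name):
        self.name, self.state = name, {"sessions": {}, "cache": {}}

class Server:
    def __init__(self, host):
        self.host, self.apps = host, {}

    def install(self, app, state_blob=None):
        if state_blob is not None:
            app.state = pickle.loads(state_blob)   # restore captured state
        self.apps[app.name] = app                  # app resumes serving here

    def uninstall(self, app):
        del self.apps[app.name]                    # stop accepting requests
        return pickle.dumps(app.state)             # snapshot state on the way out

def migrate(app, source, target):
    blob = source.uninstall(app)   # stop the app and capture its state
    target.install(app, blob)      # reinstall, reload state, resume

a, s1, s2 = App("shop"), Server("eu-1"), Server("us-2")
s1.install(a)
migrate(a, s1, s2)                 # "on the fly": clients now reach us-2
```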
Abstract:
How does the laminar organization of cortical circuitry in areas V1 and V2 give rise to 3D percepts of stratification, transparency, and neon color spreading in response to 2D pictures and 3D scenes? Psychophysical experiments have shown that such 3D percepts are sensitive to whether contiguous image regions have the same relative contrast polarity (dark-light or light-dark), yet long-range perceptual grouping is known to pool over opposite contrast polarities. The ocularity of contiguous regions is also critical for neon color spreading: having different ocularity, despite a contrast relationship that favors neon spreading, blocks the spread. In addition, half-visible points in a stereogram can induce near-depth transparency if the contrast relationship favors transparency in the half-visible areas. It thus seems critical to have the whole contrast relationship in a monocular configuration, since splitting it between two stereogram images cancels the effect. What adaptive functions of perceptual grouping enable it both to preserve sensitivity to monocular contrast and to pool over opposite contrasts? Aspects of cortical development, grouping, attention, perceptual learning, stereopsis and 3D planar surface perception have previously been analyzed using a 3D LAMINART model of cortical areas V1, V2, and V4. The present work consistently extends this model to show how like-polarity competition between V1 simple cells in layer 4 may be combined with other LAMINART grouping mechanisms, such as cooperative pooling of opposite polarities at layer 2/3 complex cells. The model also explains how the Metelli Rules can lead to transparent percepts, how bistable transparency percepts can arise in which either surface can be perceived as transparent, and how such a transparency reversal can be facilitated by an attention shift. The like-polarity inhibition prediction is consistent with lateral masking experiments in which two flanking Gabor patches with the same contrast polarity as the target increase the target detection threshold when they approach the target. It is also consistent with LAMINART simulations of cortical development. Other model explanations and testable predictions will also be presented.
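A toy numeric gloss on the polarity mechanisms named above: simple cells respond to one contrast polarity, and a complex cell pools both. The step stimulus and the diff-based contrast operator are our assumptions, not the LAMINART equations.

```python
# Polarity-specific "simple cell" responses and a "complex cell" that
# pools opposite polarities; illustrative values only.
import numpy as np

edge = np.array([0., 0., 1., 1.])       # a dark-to-light luminance step
resp = np.diff(edge)                    # assumed local contrast measure

simple_dl = np.maximum(resp, 0)         # responds to dark-light polarity
simple_ld = np.maximum(-resp, 0)        # responds to light-dark polarity
complex_cell = simple_dl + simple_ld    # layer 2/3-style pooling over both

print(simple_dl, simple_ld, complex_cell)
```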
Abstract:
Under natural viewing conditions, a single depthful percept of the world is consciously seen. When dissimilar images are presented to corresponding regions of the two eyes, binocular rivalry may occur, during which the brain consciously perceives alternating percepts through time. How do the same brain mechanisms that generate a single depthful percept of the world also cause perceptual bistability, notably binocular rivalry? What properties of brain representations correspond to consciously seen percepts? A laminar cortical model of how cortical areas V1, V2, and V4 generate depthful percepts is developed to explain and quantitatively simulate binocular rivalry data. The model proposes how mechanisms of cortical development, perceptual grouping, and figure-ground perception lead to single and rivalrous percepts. Quantitative model simulations include influences of contrast changes that are synchronized with switches in the dominant eye percept, gamma distribution of dominant phase durations, piecemeal percepts, and coexistence of eye-based and stimulus-based rivalry. The model also quantitatively explains data about multiple brain regions involved in rivalry, effects of object attention on switching between superimposed transparent surfaces, and monocular rivalry. These data explanations are linked to brain mechanisms that ensure non-rivalrous conscious percepts. To our knowledge, no existing model can explain all of these phenomena.
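To make the "gamma distribution of dominant phase durations" concrete, here is a hedged toy simulation of rivalry alternation; the shape and scale values are invented, not fitted to the paper's data.

```python
# Toy rivalry alternation with gamma-distributed dominance durations.
import numpy as np

rng = np.random.default_rng(0)
durations = rng.gamma(shape=3.5, scale=0.6, size=8)  # seconds per phase
for k, d in enumerate(durations):
    eye = "left" if k % 2 == 0 else "right"          # alternating percept
    print(f"{eye} eye dominant for {d:.2f} s")
```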
Abstract:
CONFIGR (CONtour FIgure GRound) is a computational model based on principles of biological vision that completes sparse and noisy image figures. Within an integrated vision/recognition system, CONFIGR posits an initial recognition stage which identifies figure pixels from spatially local input information. The resulting, and typically incomplete, figure is fed back to the “early vision” stage for long-range completion via filling-in. The reconstructed image is then re-presented to the recognition system for global functions such as object recognition. In the CONFIGR algorithm, the smallest independent image unit is the visible pixel, whose size defines a computational spatial scale. Once pixel size is fixed, the entire algorithm is fully determined, with no additional parameter choices. Multi-scale simulations illustrate the vision/recognition system. Open-source CONFIGR code is available online, but all examples can be derived analytically, and the design principles applied at each step are transparent. The model balances filling-in as figure against complementary filling-in as ground, which blocks spurious figure completions. Lobe computations occur on a subpixel spatial scale. Originally designed to fill in missing contours in an incomplete image such as a dashed line, the same CONFIGR system connects and segments sparse dots, and unifies occluded objects from pieces locally identified as figure in the initial recognition stage. The model self-scales its completion distances, filling in across gaps of any length where unimpeded, while limiting connections among dense image-figure pixel groups that already have intrinsic form. Long-range image completion promises to play an important role in adaptive processors that reconstruct images from highly compressed video and still camera images.
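A schematic sketch in the spirit of the completion behaviour described above: fill a gap between collinear figure pixels regardless of gap length. This toy 1-D rule is our illustration, not the published CONFIGR algorithm.

```python
# Connect the outermost figure pixels of a 1-D "dashed line".
def complete(row):
    # row: list of 0/1 figure pixels identified by a recognition stage
    ones = [i for i, v in enumerate(row) if v]
    if len(ones) >= 2:
        for i in range(ones[0], ones[-1] + 1):
            row[i] = 1          # long-range filling-in across the gap
    return row

print(complete([1, 1, 0, 0, 0, 0, 1, 1]))  # dashed line -> solid line
```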
Abstract:
This article applies a recent theory of 3-D biological vision, called FACADE Theory, to explain several percepts which Kanizsa pioneered. These include 3-D pop-out of an occluding form in front of an occluded form, leading to completion and recognition of the occluded form; 3-D transparent and opaque percepts of Kanizsa squares, with and without Varin wedges; and interactions between percepts of illusory contours, brightness, and depth in response to 2-D Kanizsa images. These explanations clarify how a partially occluded object representation can be completed for purposes of object recognition, without the completed part of the representation necessarily being seen. The theory traces these percepts to neural mechanisms that compensate for measurement uncertainty and complementarity at individual cortical processing stages by using parallel and hierarchical interactions among several cortical processing stages. These interactions are modelled by a Boundary Contour System (BCS) that generates emergent boundary segmentations and a complementary Feature Contour System (FCS) that fills-in surface representations of brightness, color, and depth. The BCS and FCS interact reciprocally with an Object Recognition System (ORS) that binds BCS boundary and FCS surface representations into attentive object representations. The BCS models the parvocellular LGN→Interblob→Interstripe→V4 cortical processing stream, the FCS models the parvocellular LGN→Blob→Thin Stripe→V4 cortical processing stream, and the ORS models inferotemporal cortex.
Abstract:
We review recent advances in all-optical OFDM technologies and discuss the performance of a field trial of a 2 Tbit/s Coherent WDM over 124 km with distributed Raman amplification. The results indicate that careful optimisation of the Raman pumps is essential. We also consider how all-optical OFDM systems compare favourably in energy consumption with alternative coherent detection schemes. We argue that, in an energy-constrained high-capacity transmission system, direct-detected all-optical OFDM with 'ideal' Raman amplification is an attractive candidate for metro-area datacentre interconnects with ~100 km fibre spans, with an overall energy requirement at least three times lower than coherent detection techniques.
Abstract:
The aging population in many countries brings into focus rising healthcare costs and pressure on conventional healthcare services. Pervasive healthcare has emerged as a viable solution capable of providing a technology-driven approach to alleviate such problems by allowing healthcare to move from hospital-centred care to self-care, mobile care, and at-home care. The state-of-the-art studies in this field, however, lack a systematic approach for providing comprehensive pervasive healthcare solutions from data collection to data interpretation and from data analysis to data delivery. In this thesis we introduce a Context-aware Real-time Assistant (CARA) architecture that integrates novel approaches with state-of-the-art technology solutions to provide a full-scale pervasive healthcare solution, with an emphasis on context awareness, to help maintain the well-being of elderly people. CARA collects information about and around the individual in a home environment, and enables accurate recognition and continuous monitoring of activities of daily living. It employs an innovative reasoning engine to provide accurate real-time interpretation of the context and assessment of the current situation. Being mindful of the use of the system for sensitive personal applications, CARA includes several mechanisms to make its sophisticated intelligent components as transparent and accountable as possible; it also includes a novel cloud-based component for more effective data analysis. To deliver automated real-time services, CARA supports interactive video and medical-sensor-based remote consultation. Our proposal has been validated in three application domains that are rich in pervasive contexts and real-time scenarios: (i) Mobile-based Activity Recognition, (ii) Intelligent Healthcare Decision Support Systems and (iii) Home-based Remote Monitoring Systems.
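A hedged sketch of the kind of context-aware rule a CARA-style reasoning engine might evaluate over home-sensor data; the sensor names and thresholds here are invented for illustration, not taken from the thesis.

```python
# Toy context-interpretation rules over a dict of home-sensor readings.
def assess(context):
    if context["heart_rate"] > 120 and context["activity"] == "resting":
        return "alert: anomalous heart rate at rest"
    if context["room"] == "bathroom" and context["motionless_minutes"] > 30:
        return "alert: possible fall"
    return "ok"

print(assess({"heart_rate": 130, "activity": "resting",
              "room": "bedroom", "motionless_minutes": 0}))
```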
Abstract:
It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2^70 bytes), and this figure is expected to grow by a factor of 10, to 44 zettabytes, by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising such systems have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats these characteristics as the dominant component affecting the results being sought. This multiplicity of analysis techniques introduces another layer of heterogeneity, that is, heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. The question is therefore asked: can a generic solution for the monitoring and analysis of data be identified that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be effectively interpreted and exploited in a transparent manner? The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating these techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques to the same raw data without the danger of incorporating hidden bias. To illustrate and realise this approach, a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. To demonstrate these concepts, a complex real-world example involving the near real-time capturing and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, because it requires tight interaction with domain experts, thus requiring the handling of subjective knowledge and inference; and thirdly, given the dearth of neurophysiologists, there is a real-world need to provide a solution for this domain.
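A minimal sketch of the provenance idea described above: every derived result carries a record of the raw data and the analysis step that produced it, so an independent third party can re-run and verify the chain. The function and field names are illustrative, not the platform's actual API.

```python
# Pair each analysis result with a provenance record of its inputs.
import hashlib, json

def analyse(raw, technique, fn):
    result = fn(raw)
    record = {
        "technique": technique,
        "input_sha256": hashlib.sha256(json.dumps(raw).encode()).hexdigest(),
        "result": result,
    }
    return result, record   # the result plus its provenance record

data = [3, 5, 4, 8]
mean, prov = analyse(data, "mean", lambda xs: sum(xs) / len(xs))
print(mean, prov["input_sha256"][:12])
```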
Abstract:
We introduce a class of optical media based on adiabatically modulated, dielectric-only, and potentially extremely low-loss photonic crystals (PCs). The media we describe represent a generalization of the eikonal limit of transformation optics (TO). The basis of the concept is the possibility of fitting some equal-frequency surfaces of certain PCs with elliptic surfaces, allowing them to mimic the dispersion relation of light in anisotropic effective media. PC cloaks and other TO devices operating at visible wavelengths can be constructed from optically transparent substances such as glasses, whose attenuation coefficient can be as small as 10 dB/km, suggesting the TO design methodology can be applied to the development of optical devices not limited by the losses inherent to metal-based, passive metamaterials.
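As a gloss on "fitting equal-frequency surfaces with elliptic surfaces": for an anisotropic effective medium with principal indices n_x, n_y, n_z, the equal-frequency surface is the standard index ellipsoid below, which is the kind of dispersion relation such a PC would mimic. This textbook relation is our illustration, not a formula quoted from the paper.

```latex
% Equal-frequency (index) ellipsoid of an anisotropic effective medium.
\[
  \frac{k_x^{2}}{n_x^{2}} + \frac{k_y^{2}}{n_y^{2}} + \frac{k_z^{2}}{n_z^{2}}
  = \frac{\omega^{2}}{c^{2}}
\]
```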
Abstract:
The increase in antibiotic resistance and the dearth of novel antibiotics have become a growing concern among policy-makers. A combination of financial, scientific, and regulatory challenges poses barriers to antibiotic innovation. However, each of these three challenges provides an opportunity to develop pathways for new business models to bring novel antibiotics to market. Pull-incentives that pay for the outputs of research and development (R&D) and push-incentives that pay for the inputs of R&D can be used to increase innovation for antibiotics. Financial incentives might be structured to promote delinkage of a company's return on investment from revenues of antibiotics. This delinkage strategy might not only increase innovation, but also reinforce rational use of antibiotics. Regulatory approval, however, should not and need not compromise safety and efficacy standards to bring antibiotics with novel mechanisms of action to market. Instead regulatory agencies could encourage development of companion diagnostics, test antibiotic combinations in parallel, and pool and make transparent clinical trial data to lower R&D costs. A tax on non-human use of antibiotics might also create a disincentive for non-therapeutic use of these drugs. Finally, the new business model for antibiotic innovation should apply the 3Rs strategy for encouraging collaborative approaches to R&D in innovating novel antibiotics: sharing resources, risks, and rewards.
Abstract:
Poly(ethylene oxide) (PEO) is one of the most researched synthetic polymers due to the complex behavior which arises from the interplay of the hydrophilic and hydrophobic sites on the polymer chain. PEO in ethanol forms an opaque gel-like mixture with a partially crystalline structure. Addition of a small amount of water disrupts the gel: 5 wt % PEO in ethanol becomes a transparent solution with the addition of 4 vol % water. The phase behavior of PEO in mixed solvents has been studied using small-angle neutron scattering (SANS). PEO solutions (5 wt % PEO) which contain 4 vol % to 10 vol % (and higher) water behave as an athermal polymer solution, and the phase behavior changes from UCST to LCST rapidly as the fraction of water is increased. 2 wt % PEO in water and 10 wt % PEO in ethanol/water mixtures are examined to assess the role of hydration. The observed phase behavior is consistent with a hydration layer forming upon the addition of water as the system shifts from UCST to LCST behavior. At the molecular level, two or three water molecules can hydrate one PEO monomer (water molecules form a sheath around the PEO macromolecule), which is consistent with the suppression of crystallization and the change in phase behavior observed by SANS. The clustering effect of aqueous PEO solution (molecular weight of PEO = 90,000 g/mol) is monitored as an excess scattering intensity at low Q. Clustering intensity at Q = 0.004 Å^-1 is used for evaluating the clustering effect. The clustering intensity is proportional to the inverse temperature and levels off when the temperature is less than 50 °C. When the temperature is increased above 50 °C, the clustering intensity starts decreasing. The clustering of PEO is also monitored in ethanol/water mixtures, where the clustering intensity increases as the fraction of water is increased. Based on the solvation intensity behavior, we confirmed that the ethanol/water mixtures obey a random solvent mixing rule, whereby solvent mixtures are better at solvating the polymer than either of the two pure solvents. The solution behavior of PEO in ethanol was also investigated in the presence of salt (CaCl2) using SANS. Binding of Ca2+ ions to the PEO oxygens transforms the neutral polymer into a weakly charged polyelectrolyte and gives rise to repulsive interactions between the PEO/Ca2+ complexes; accordingly, the PEO/ethanol solution is better solvated at higher salt concentration due to the electrostatic repulsion of the weakly charged monomers. Addition of salt disrupts the gel, which is consistent with better solvation as the salt concentration is increased. Moreover, SANS shows that the phase behavior of PEO/ethanol changes from UCST to LCST as the salt concentration is increased.