911 results for streams
Abstract:
This article describes further evidence for a new neural network theory of biological motion perception. The theory clarifies why parallel streams V1 --> V2, V1 --> MT, and V1 --> V2 --> MT exist for static form and motion form processing among the areas V1, V2, and MT of visual cortex. The theory suggests that the static form system (Static BCS) generates emergent boundary segmentations whose outputs are insensitive to direction-of-contrast and insensitive to direction-of-motion, whereas the motion form system (Motion BCS) generates emergent boundary segmentations whose outputs are insensitive to direction-of-contrast but sensitive to direction-of-motion. The theory is used to explain classical and recent data about short-range and long-range apparent motion percepts that have not yet been explained by alternative models. These data include beta motion; split motion; gamma motion and reverse-contrast gamma motion; delta motion; visual inertia; the transition from group motion to element motion in response to a Ternus display as the interstimulus interval (ISI) decreases; group motion in response to a reverse-contrast Ternus display even at short ISIs; speed-up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies between flash duration, spatial separation, ISI, and motion threshold known as Korte's Laws; dependence of motion strength on stimulus orientation and spatial frequency; short-range and long-range form-color interactions; and binocular interactions of flashes to different eyes.
Abstract:
A neural network theory of 3-D vision, called FACADE Theory, is described. The theory proposes a solution of the classical figure-ground problem for biological vision. It does so by suggesting how boundary representations and surface representations are formed within a Boundary Contour System (BCS) and a Feature Contour System (FCS). The BCS and FCS interact reciprocally to form 3-D boundary and surface representations that are mutually consistent. Their interactions generate 3-D percepts wherein occluding and occluded objects are separated, completed, and grouped. The theory clarifies how preattentive processes of 3-D perception and figure-ground separation interact reciprocally with attentive processes of spatial localization, object recognition, and visual search. A new theory of stereopsis is proposed that predicts how cells sensitive to multiple spatial frequencies, disparities, and orientations are combined by context-sensitive filtering, competition, and cooperation to form coherent BCS boundary segmentations. Several factors contribute to figure-ground pop-out, including: boundary contrast between spatially contiguous boundaries, whether due to scenic differences in luminance, color, spatial frequency, or disparity; partially ordered interactions from larger spatial scales and disparities to smaller scales and disparities; and surface filling-in restricted to regions surrounded by a connected boundary. Phenomena such as 3-D pop-out from a 2-D picture, DaVinci stereopsis, 3-D neon color spreading, completion of partially occluded objects, and figure-ground reversals are analysed. The BCS and FCS sub-systems model aspects of how the two parvocellular cortical processing streams that join the Lateral Geniculate Nucleus to prestriate cortical area V4 interact to generate a multiplexed representation of Form-And-Color-And-Depth, or FACADE, within area V4. Area V4 is suggested to support figure-ground separation and to interact with cortical mechanisms of spatial attention, attentive object learning, and visual search. Adaptive Resonance Theory (ART) mechanisms model aspects of how prestriate visual cortex interacts reciprocally with a visual object recognition system in inferotemporal cortex (IT) for purposes of attentive object learning and categorization. Object attention mechanisms of the What cortical processing stream through IT cortex are distinguished from spatial attention mechanisms of the Where cortical processing stream through parietal cortex. Parvocellular BCS and FCS signals interact with the model What stream. Parvocellular FCS and magnocellular Motion BCS signals interact with the model Where stream. Reciprocal interactions between these visual, What, and Where mechanisms are used to discuss data about visual search and saccadic eye movements, including fast search of conjunctive targets, search of 3-D surfaces, selective search of like-colored targets, attentive tracking of multi-element groupings, and recursive search of simultaneously presented targets.
Abstract:
A neural network model of 3-D visual perception and figure-ground separation by visual cortex is introduced. The theory provides a unified explanation of how a 2-D image may generate a 3-D percept; how figures pop out from cluttered backgrounds; how spatially sparse disparity cues can generate continuous surface representations at different perceived depths; how representations of occluded regions can be completed and recognized without usually being seen; how occluded regions can sometimes be seen during percepts of transparency; how high spatial frequency parts of an image may appear closer than low spatial frequency parts; how sharp targets are detected better against a figure and blurred targets are detected better against a background; how low spatial frequency parts of an image may be fused while high spatial frequency parts are rivalrous; how sparse blue cones can generate vivid blue surface percepts; how 3-D neon color spreading, visual phantoms, and tissue contrast percepts are generated; how conjunctions of color-and-depth may rapidly pop out during visual search. These explanations are derived from an ecological analysis of how monocularly viewed parts of an image inherit the appropriate depth from contiguous binocularly viewed parts, as during DaVinci stereopsis. The model predicts the functional role and ordering of multiple interactions within and between the two parvocellular processing streams that join LGN to prestriate area V4. Interactions from cells representing larger scales and disparities to cells representing smaller scales and disparities are of particular importance.
Abstract:
A computational model of visual processing in the vertebrate retina provides a unified explanation of a range of data previously treated by disparate models. Three results are reported here: the model proposes a functional explanation for the primary feed-forward retinal circuit found in vertebrate retinae, it shows how this retinal circuit combines nonlinear adaptation with the desirable properties of linear processing, and it accounts for the origin of parallel transient (nonlinear) and sustained (linear) visual processing streams as simple variants of the same retinal circuit. The retina, owing to its accessibility and to its fundamental role in the initial transduction of light into neural signals, is among the most extensively studied neural structures in the nervous system. Since the pioneering anatomical work by Ramón y Cajal at the turn of the last century[1], technological advances have enabled detailed descriptions of the physiological, pharmacological, and functional properties of many types of retinal cells. However, the relationship between structure and function in the retina is still poorly understood. This article outlines a computational model developed to address fundamental constraints of biological visual systems. Neurons that process nonnegative input signals, such as retinal illuminance, are subject to an inescapable tradeoff between accurate processing in the spatial and temporal domains. Accurate processing in both domains can be achieved with a model that combines nonlinear mechanisms for temporal and spatial adaptation within three layers of feed-forward processing. The resulting architecture is structurally similar to the feed-forward retinal circuit connecting photoreceptors to retinal ganglion cells through bipolar cells. This similarity suggests that the three-layer structure observed in all vertebrate retinae[2] is a required minimal anatomy for accurate spatiotemporal visual processing. This hypothesis is supported through computer simulations showing that the model's output layer accounts for many properties of retinal ganglion cells[3],[4],[5],[6]. Moreover, the model shows how the retina can extend its dynamic range through nonlinear adaptation while exhibiting seemingly linear behavior in response to a variety of spatiotemporal input stimuli. This property is the basis for the prediction that the same retinal circuit can account for both sustained (X) and transient (Y) cat ganglion cells[7] by simple morphological changes. The ability to generate distinct functional behaviors by simple changes in cell morphology suggests that different functional pathways originating in the retina may have evolved from a unified anatomy designed to cope with the constraints of low-level biological vision.
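The mechanism lends itself to a compact simulation. Below is a minimal illustrative sketch of a three-layer feed-forward circuit of the kind described, with divisive temporal adaptation at the photoreceptor layer, center-surround filtering at the bipolar layer, and sustained (X-like) versus transient (Y-like) ganglion readouts; all names and parameter values are assumptions for demonstration, not the authors' published equations.

```python
import numpy as np

def center_surround_kernel(n, sigma_c=1.0, sigma_s=3.0):
    """1-D difference-of-Gaussians: narrow excitatory center,
    broad inhibitory surround (each normalized to unit area)."""
    x = np.arange(n) - n // 2
    center = np.exp(-x**2 / (2 * sigma_c**2))
    surround = np.exp(-x**2 / (2 * sigma_s**2))
    return center / center.sum() - surround / surround.sum()

def retina_step(illuminance, adapt_state, kernel, tau_adapt=50.0, sigma=1.0):
    """One time step through the first two layers.
    Layer 1 (photoreceptors): divisive adaptation by a slow running
    average of the input extends dynamic range while keeping local
    responses approximately linear.
    Layer 2 (bipolar cells): center-surround spatial filtering."""
    adapt_state = adapt_state + (illuminance - adapt_state) / tau_adapt
    photoreceptor = illuminance / (sigma + adapt_state)
    bipolar = np.convolve(photoreceptor, kernel, mode="same")
    return bipolar, adapt_state

# Simulate a bright patch switching on over a 1-D strip of retina.
n_pix, n_steps = 64, 200
kernel = center_surround_kernel(n_pix)
adapt = np.ones(n_pix)
sustained = np.zeros(n_pix)
for t in range(n_steps):
    lum = np.ones(n_pix)
    if t > 50:
        lum[n_pix // 4: 3 * n_pix // 4] += 9.0
    bipolar, adapt = retina_step(lum, adapt, kernel)
    # Layer 3 (ganglion cells): one circuit yields a sustained (X-like,
    # low-pass) stream and a transient (Y-like, rectified change-driven)
    # stream as simple variants of the same readout.
    transient = np.maximum(bipolar - sustained, 0.0)
    sustained = 0.9 * sustained + 0.1 * bipolar
```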
Abstract:
Aim: Diabetes is an important barometer of health system performance. This chronic condition is a source of significant morbidity, premature mortality and a major contributor to health care costs. There is an increasing focus internationally, and more recently nationally, on system, practice and professional-level initiatives to promote the quality of care. The aim of this thesis was to investigate the ‘quality chasm’ around the organisation and delivery of diabetes care in general practice, to explore GPs’ attitudes to engaging in quality improvement activities and to examine efforts to improve the quality of diabetes care in Ireland from practice to policy. Methods: Quantitative and qualitative methods were used. As part of a mixed methods sequential design, a postal survey of 600 GPs was conducted to assess the organisation of care. This was followed by an in-depth qualitative study using semi-structured interviews with a purposive sample of 31 GPs from urban and rural areas. The qualitative methodology was also used to examine GPs’ attitudes to engaging in quality improvement. Data were analysed using a Framework approach. A second observational study was used to assess the quality of care in 63 practices with a special interest in diabetes. Data on 3010 adults with Type 2 diabetes from 3 primary care initiatives were analysed and the results were benchmarked against national guidelines and standards of care in the UK. The final study was an instrumental case study of policy formulation. Semi-structured interviews were conducted with 15 members of the Expert Advisory Group (EAG) for Diabetes. Thematic analysis was applied to the data using 3 theories of the policy process as analytical tools. Results: The survey response rate was 44% (n=262). Results suggested care delivery was largely unstructured; 45% of GPs had a diabetes register (n=157), 53% reported using guidelines (n=140), 30% had a formal call-recall system (n=78) and 24% had none of these organisational features (n=62). Only 10% of GPs had a formal shared protocol with the local hospital specialist diabetes team (n=26). The lack of coordination between settings was identified as a major barrier to providing optimal care, leading to waiting times, overburdened hospitals and avoidable duplication. The lack of remuneration for chronic disease management had a ripple effect, creating costs for patients and apathy among GPs. There was also a sense of inertia around quality improvement activities, particularly at a national level. This attitude was strongly influenced by previous experiences of change in the health system. In contrast, GPs spoke positively about change at a local level, which was facilitated by a practice ethos, leadership and a special interest in diabetes. The second quantitative study found that practices with a special interest in diabetes achieved a standard of care comparable to the UK in terms of the recording of clinical processes of care and the achievement of clinical targets; 35% of patients reached the HbA1c target of <6.5% compared to 26% in England and Wales. With regard to diabetes policy formulation, the evolving process of action and inaction was best described by the Multiple Streams Theory. Within the EAG, the formulation of recommendations was facilitated by overarching agreement on the “obvious” priorities while the details of proposals were influenced by personal preferences and local capacity. In contrast, the national decision-making process was protracted and ambiguous.
The lack of impetus from senior management, coupled with the lack of power conferred on the EAG, impeded progress. Conclusions: The findings highlight the inconsistency of diabetes care in Ireland. The main barriers to optimal diabetes management center on the organisation and coordination of care at the systems level, with consequences for practice, providers and patients. Quality improvement initiatives need to stimulate a sense of ownership and interest among frontline service providers to address the local sense of inertia to national change. To date, quality improvement in diabetes care has been largely dependent on the “special interest” of professionals. The challenge for the Irish health system is to embed this activity as part of routine practice, professional responsibility and the underlying health care culture.
Abstract:
Anaerobic digestion (AD) of biodegradable waste is an environmentally and economically sustainable solution which incorporates waste treatment and energy recovery. The organic fraction of municipal solid waste (OFMSW), which consists mostly of food waste, is highly degradable under anaerobic conditions. Biogas produced from OFMSW, when upgraded to biomethane, is recognised as one of the most sustainable renewable biofuels and can also be one of the cheapest sources of biomethane if a gate fee is associated with the substrate. OFMSW is a complex and heterogeneous material which may have widely different characteristics depending on the source of origin and collection system used. The research presented in this thesis investigates the potential energy resource from a wide range of organic waste streams through field and laboratory research on real-world samples. OFMSW samples collected from a range of sources generated methane yields ranging from 75 to 160 m³ per tonne. Higher methane yields are associated with source-segregated food waste from commercial catering premises as opposed to domestic sources. The inclusion of garden waste reduces the specific methane yield from household organic waste. In continuous AD trials it was found that a conventional continuously stirred tank reactor (CSTR) gave the highest specific methane yields at a moderate organic loading rate of 2 kg volatile solids (VS) m⁻³ digester day⁻¹ and a hydraulic retention time of 30 days. The average specific methane yield obtained at this loading rate in continuous digestion was 560 ± 29 L CH₄ kg⁻¹ VS, which exceeded the biomethane potential test result by 5%. The low carbon-to-nitrogen ratio (C:N < 14:1) associated with canteen food waste led to increasing concentrations of volatile fatty acids along with high concentrations of ammonia nitrogen at higher organic loading rates. At an organic loading rate of 4 kg VS m⁻³ day⁻¹ the specific methane yield dropped considerably (381 L CH₄ kg⁻¹ VS), the pH rose to 8.1 and free ammonia (NH₃) concentrations reached toxic levels towards the end of the trial (ca. 950 mg L⁻¹). A novel two-phase AD reactor configuration consisting of a series of sequentially fed leach bed reactors connected to an upflow anaerobic sludge blanket (UASB) demonstrated a high rate of organic matter decay but resulted in lower specific methane yields (384 L CH₄ kg⁻¹ VS) than the conventional CSTR system.
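As a back-of-envelope illustration of what the quoted figures imply (not a result reported in the thesis), the volumetric methane productivity at the optimal loading is the product of the organic loading rate and the specific methane yield:

\[
2\ \tfrac{\mathrm{kg\,VS}}{\mathrm{m^{3}\,day}} \times 560\ \tfrac{\mathrm{L\,CH_4}}{\mathrm{kg\,VS}} = 1120\ \tfrac{\mathrm{L\,CH_4}}{\mathrm{m^{3}\,day}},
\]

i.e. slightly more than one digester volume of methane per day at the 2 kg VS m⁻³ day⁻¹ loading rate.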
Abstract:
Recent years have witnessed a rapid growth in the demand for streaming video over the Internet, exposing challenges in coping with heterogeneous device capabilities and varying network throughput. When we couple this rise in streaming with the growing number of portable devices (smartphones, tablets, laptops) we see an ever-increasing demand for high-definition videos online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error loss rates, thus presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support/demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and heterogeneity, and can provide graceful changes in video quality while maintaining viewer satisfaction. In this context the use of well-known scalable media streaming techniques, commonly known as scalable coding, is an attractive solution and the focus of this thesis. In this thesis we investigate the transmission of existing scalable video models over a lossy network and determine how the variation in viewable quality is affected by packet loss. This work focuses on leveraging the benefits of scalable media, while reducing the effects of data loss on achievable video quality. The overall approach is focused on the strategic packetisation of the underlying scalable video and how to best utilise error resiliency to maximise viewable quality. In particular, we examine the manner in which scalable video is packetised for transmission over lossy networks and propose new techniques that reduce the impact of packet loss on scalable video by selectively choosing how to packetise the data and which data to transmit. We also exploit redundancy techniques, such as error resiliency, to enhance the stream quality by ensuring a smooth play-out with fewer changes in achievable video quality. The contributions of this thesis are in the creation of new segmentation and encapsulation techniques which increase the viewable quality of existing scalable models by fragmenting and re-allocating the video sub-streams based on user requirements, available bandwidth and variations in loss rates. We offer new packetisation techniques which reduce the effects of packet loss on viewable quality by leveraging the increase in the number of frames per group of pictures (GOP) and by providing equality of data in every packet transmitted per GOP. These provide novel mechanisms for packetisation and error resiliency, as well as providing new applications for existing techniques such as Interleaving and Priority Encoded Transmission. We also introduce three new scalable coding models, which offer a balance between transmission cost and the consistency of viewable quality.
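A minimal sketch of the interleaved, equal-share packetisation idea (assumed function names and a byte-striping simplification, not the thesis' actual encapsulation formats): bytes of a GOP are striped round-robin across packets, so one lost packet thins every frame slightly instead of erasing consecutive frames.

```python
from typing import List, Optional

def packetise_gop(frames: List[bytes], n_packets: int) -> List[bytes]:
    """Stripe the GOP's bytes round-robin across n_packets payloads,
    so each packet carries an (almost) equal share of every frame."""
    gop = b"".join(frames)
    return [gop[k::n_packets] for k in range(n_packets)]

def depacketise(payloads: List[Optional[bytes]], gop_len: int) -> bytes:
    """Reassemble the GOP, zero-filling slices whose packet was lost
    (marked None); losses are spread thinly over all frames."""
    n = len(payloads)
    out = bytearray(gop_len)
    for k, p in enumerate(payloads):
        slice_len = len(range(k, gop_len, n))
        out[k::n] = p if p is not None else b"\x00" * slice_len
    return bytes(out)

# Example: four 100-byte frames, eight packets, packet 2 lost in transit.
frames = [bytes([i]) * 100 for i in range(4)]
payloads = packetise_gop(frames, 8)
payloads[2] = None
recovered = depacketise(payloads, sum(len(f) for f in frames))
```

Priority-encoded redundancy could be appended per payload in the same equal-share fashion, which is one plausible reading of how interleaving and error resiliency combine here.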
Abstract:
It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2⁷⁰ bytes), and this figure is expected to have grown by a factor of 10 to 44 zettabytes by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising the system described above have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats these characteristics as the dominant component affecting the results being sought. The multiplicity of analysis techniques introduces another layer of heterogeneity, that is, heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. This raises the question: can a generic solution for the monitoring and analysis of data be identified that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be effectively interpreted and exploited in a transparent manner? The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating these techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques to the same raw data without the danger of incorporating hidden bias that may exist. To illustrate and to realise this approach a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. In order to demonstrate these concepts, a complex real-world example involving the near real-time capturing and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, it requires tight interaction with domain experts, thus requiring the handling of subjective knowledge and inference; and thirdly, given the dearth of neurophysiologists, there is a real-world need to provide a solution for this domain.
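A minimal sketch of the production/interpretation/consumption workflow with a provenance trail (the names Annotated and apply_step are invented for illustration; the platform's real API is not described in the abstract):

```python
import hashlib
import json
import time
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class Annotated:
    """A datum plus the auditable history of how it was produced."""
    data: Any
    provenance: List[dict] = field(default_factory=list)

def apply_step(item: Annotated, name: str, fn: Callable[[Any], Any]) -> Annotated:
    """Run one analysis technique and append a provenance record,
    so independent third parties can verify any derived conclusion."""
    record = {
        "step": name,
        "timestamp": time.time(),
        "input_digest": hashlib.sha256(
            json.dumps(item.data, default=str).encode()).hexdigest(),
    }
    return Annotated(fn(item.data), item.provenance + [record])

# Two analysis techniques applied independently to the SAME raw stream,
# each carrying its own trail -- no technique constrains the raw data.
raw = Annotated([3, 1, 4, 1, 5, 9, 2, 6])
mean_view = apply_step(raw, "mean", lambda xs: sum(xs) / len(xs))
peak_view = apply_step(raw, "max", max)
print(mean_view.data, [r["step"] for r in mean_view.provenance])
```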
Abstract:
While cochlear implants (CIs) usually provide high levels of speech recognition in quiet, speech recognition in noise remains challenging. To overcome these difficulties, it is important to understand how implanted listeners separate a target signal from interferers. Stream segregation has been studied extensively in both normal and electric hearing, as a function of place of stimulation. However, the effects of pulse rate, independent of place, on the perceptual grouping of sequential sounds in electric hearing have not yet been investigated. A rhythm detection task was used to measure stream segregation. The results of this study suggest that while CI listeners can segregate streams based on differences in pulse rate alone, the amount of stream segregation observed decreases as the base pulse rate increases. Further investigation of the perceptual dimensions encoded by the pulse rate and the effect of sequential presentation of different stimulation rates on perception could be beneficial for the future development of speech processing strategies for CIs.
Abstract:
We explore the possibilities of obtaining compression in video through modified sampling strategies using multichannel imaging systems. The redundancies in video streams are exploited through compressive sampling schemes to achieve low-power and low-complexity video sensors. The sampling strategies as well as the associated reconstruction algorithms are discussed. These compressive sampling schemes could be implemented in the focal-plane readout hardware, resulting in a drastic reduction in data bandwidth and computational complexity.
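To make the principle concrete, here is a generic compressed-sensing sketch (random linear measurements plus greedy recovery), not the paper's specific multichannel focal-plane design; sparse inter-frame differences are an assumption made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5            # pixels, measurements, nonzeros

# Sparse inter-frame difference: most pixels unchanged between frames.
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

# Random sensing matrix -- conceptually, the kind of coded linear
# combinations of pixel values a focal-plane readout could compute.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x                        # m measurements instead of n pixels

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: pick the column best correlated
    with the residual, re-fit by least squares, repeat."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```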
Abstract:
The neo-classical economics view that behavior is driven by - and reflective of - hedonic utility is challenged by psychologists' demonstrations of cases in which actions do not merely reveal preferences but rather create them. In this view, preferences are frequently constructed in the moment and are susceptible to fleeting situational factors; problematically, individuals are insensitive to the impact of such factors on their behavior, misattributing utility caused by these irrelevant factors to stable underlying preferences. Consequently, subsequent behavior might reflect not hedonic utility but rather this erroneously imputed utility that lingers in memory. Here we review the roles of these streams of utility in shaping preferences, and discuss how neuroimaging offers unique possibilities for disentangling their independent contributions to behavior.
Abstract:
Alewife, Alosa pseudoharengus, populations occur in two discrete life-history variants, an anadromous form and a landlocked (freshwater resident) form. Landlocked populations display a consistent pattern of life-history divergence from anadromous populations, including earlier age at maturity, smaller adult body size, and reduced fecundity. In Connecticut (USA), dams constructed on coastal streams separate anadromous spawning runs from lake-resident landlocked populations. Here, we used sequence data from the mtDNA control region and allele frequency data from five microsatellite loci to ask whether coastal Connecticut landlocked alewife populations are independently evolved from anadromous populations or whether they share a common freshwater ancestor. We then used microsatellite data to estimate the timing of the divergence between anadromous and landlocked populations. Finally, we examined anadromous and landlocked populations for divergence in foraging morphology and used divergence time estimates to calculate the rate of evolution for foraging traits. Our results indicate that landlocked populations have evolved multiple times independently. Tests of population divergence and estimates of gene flow show that landlocked populations are genetically isolated, whereas anadromous populations exchange genes. These results support a 'phylogenetic raceme' model of landlocked alewife divergence, with anadromous populations forming an ancestral core from which landlocked populations independently diverged. Divergence time estimates suggest that landlocked populations diverged from a common anadromous ancestor no more than 5000 years ago and perhaps as recently as 300 years ago, depending on the microsatellite mutation rate assumed. Examination of foraging traits reveals landlocked populations to have significantly narrower gapes and smaller gill raker spacings than anadromous populations, suggesting that they are adapted to foraging on smaller prey items. Estimates of evolutionary rates (in haldanes) indicate rapid evolution of foraging traits, possibly in response to changes in available resources.
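For reference, a standard formulation of the haldane (change in pooled phenotypic standard deviations per generation; a general definition, not this study's specific computation) is:

\[
h = \frac{\bar{z}_2 - \bar{z}_1}{s_p \, g},
\]

where \(\bar{z}_1\) and \(\bar{z}_2\) are the (often ln-transformed) trait means of the two populations, \(s_p\) is their pooled standard deviation, and \(g\) is the number of generations separating them.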
Abstract:
Interactions between natural selection and environmental change are well recognized and sit at the core of ecology and evolutionary biology. Reciprocal interactions between ecology and evolution, eco-evolutionary feedbacks, are less well studied, even though they may be critical for understanding the evolution of biological diversity, the structure of communities and the function of ecosystems. Eco-evolutionary feedbacks require that populations alter their environment (niche construction) and that those changes in the environment feed back to influence the subsequent evolution of the population. There is strong evidence that organisms influence their environment through predation, nutrient excretion and habitat modification, and that populations evolve in response to changes in their environment at time-scales congruent with ecological change (contemporary evolution). Here, we outline how niche construction and contemporary evolution interact to alter the direction of evolution and the structure and function of communities and ecosystems. We then present five empirical systems that highlight important characteristics of eco-evolutionary feedbacks: rotifer-algae chemostats; alewife-zooplankton interactions in lakes; guppy life-history evolution and nutrient cycling in streams; avian seed predators and plants; and tree leaf chemistry and soil processes. The alewife-zooplankton system provides the most complete evidence for eco-evolutionary feedbacks, but other systems highlight the potential for eco-evolutionary feedbacks in a wide variety of natural systems.
Abstract:
The safe disposal of liquid wastes associated with oil and gas production in the United States is a major challenge given their large volumes and typically high levels of contaminants. In Pennsylvania, oil and gas wastewater is sometimes treated at brine treatment facilities and discharged to local streams. This study examined the water quality and isotopic compositions of discharged effluents, surface waters, and stream sediments associated with a treatment facility site in western Pennsylvania. The elevated levels of chloride and bromide, combined with the strontium, radium, oxygen, and hydrogen isotopic compositions of the effluents reflect the composition of Marcellus Shale produced waters. The discharge of the effluent from the treatment facility increased downstream concentrations of chloride and bromide above background levels. Barium and radium were substantially (>90%) reduced in the treated effluents compared to concentrations in Marcellus Shale produced waters. Nonetheless, ²²⁶Ra levels in stream sediments (544-8759 Bq/kg) at the point of discharge were ~200 times greater than upstream and background sediments (22-44 Bq/kg) and above radioactive waste disposal threshold regulations, posing potential environmental risks of radium bioaccumulation in localized areas of shale gas wastewater disposal.
Abstract:
Mountaintop mining (MTM) is the primary procedure for surface coal extraction within the central Appalachian region of the eastern United States, and it is known to contaminate streams in local watersheds. In this study, we measured the chemical and isotopic compositions of water samples from MTM-impacted tributaries and streams in the Mud River watershed in West Virginia. We systematically document the isotopic compositions of three major constituents: sulfur isotopes in sulfate (δ³⁴S-SO₄), carbon isotopes in dissolved inorganic carbon (δ¹³C-DIC), and strontium isotopes (⁸⁷Sr/⁸⁶Sr). The data show that δ³⁴S-SO₄, δ¹³C-DIC, Sr/Ca, and ⁸⁷Sr/⁸⁶Sr measured in saline- and selenium-rich MTM-impacted tributaries are distinguishable from those of the surface water upstream of mining impacts. These tracers can therefore be used to delineate and quantify the impact of MTM in watersheds. High Sr/Ca and low ⁸⁷Sr/⁸⁶Sr characterize tributaries that originated from active MTM areas, while tributaries from reclaimed MTM areas had low Sr/Ca and high ⁸⁷Sr/⁸⁶Sr. Leaching experiments of rocks from the watershed show that pyrite oxidation and carbonate dissolution control the solute chemistry, with distinct ⁸⁷Sr/⁸⁶Sr ratios characterizing different rock sources. We propose that MTM operations that access the deeper Kanawha Formation generate residual mined rocks in valley fills from which effluents with distinctive ⁸⁷Sr/⁸⁶Sr and Sr/Ca imprints affect the quality of the Appalachian watersheds.
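As an illustration of how such tracers can quantify impact, a textbook two-endmember mixing calculation (not the study's own analysis; rigorous versions weight each endmember by its Sr concentration) gives the fraction of stream Sr attributable to MTM effluent:

\[
f_{\mathrm{MTM}} = \frac{R_{\mathrm{stream}} - R_{\mathrm{bkg}}}{R_{\mathrm{MTM}} - R_{\mathrm{bkg}}}, \qquad R \equiv {}^{87}\mathrm{Sr}/{}^{86}\mathrm{Sr},
\]

where \(R_{\mathrm{bkg}}\) and \(R_{\mathrm{MTM}}\) are the background and MTM-effluent endmember ratios, respectively.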