964 results for Logic of many
Abstract:
Q. Meng and M.H. Lee, 'Biologically inspired automatic construction of cross-modal mapping in robotic eye/hand systems', IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2006), pp. 4742-4749, Beijing, 2006.
Abstract:
Dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Communication Sciences, specialisation in Marketing and Advertising
Abstract:
This dissertation narrates the historical development of American evangelical missions to the poor from 1947 to 2005 and analyzes the discourse of its main parachurch proponents, especially World Vision, Compassion International, Food for the Hungry, Samaritan's Purse, Sojourners, Evangelicals for Social Action, and the Christian Community Development Association. Although recent scholarship on evangelicalism has been prolific, much of the historical work has focused on earlier periods. Sociological and political scientific scholarship on the postwar period has been drawn mostly to controversies surrounding the Religious Right, leaving evangelicalism's resurgent concern for the poor relatively understudied. This dissertation addresses these lacunae. The study consists of three chronological parts, each marked by a distinctive model of mission to the poor. First, the 1950s were characterized by compassionate charity for individual emergencies, a model that cohered neatly with evangelicalism's individualism and emotionalism. This model should be regarded as the quintessential, bedrock evangelical theory of mission to the poor. It remained strong throughout the entire postwar period. Second, in the 1970s, a strong countercurrent emerged that advocated for penitent protest against structural injustice and underdevelopment. In contrast to the first model, it was distinguished by going against the grain of many aspects of evangelical culture, especially its reflexive patriotism and individualism. Third, in the 1990s, an important movement towards developing potential through hopeful holism gained prominence. Its advocates were confident that their integration of biblical principles with insights from contemporary economic development praxis would contribute to drastic, widespread reductions in poverty. This model signaled a new optimism in evangelicalism's engagement with the broader world.
The increasing prominence of missions to the poor within American evangelicalism led to dramatic changes within the movement's worldview: by 2005, evangelicals were mostly unified in their expressed concern for the physical and social needs of the poor, a position that radically reversed their immediate postwar worldview of near-exclusive focus on the spiritual needs of individuals. Nevertheless, missions to the poor also paralleled, reinforced, and hastened the increasing fragmentation of evangelicalism's identity, as each missional model advocated for highly variant approaches to poverty amelioration that were undergirded by diverse sociological, political, and theological assumptions.
Abstract:
To support the diverse Quality of Service (QoS) requirements of real-time (e.g. audio/video) applications in integrated services networks, several routing algorithms that allow for the reservation of the needed bandwidth over a Virtual Circuit (VC) established on one of several candidate routes have been proposed. Traditionally, such routing is done using the least-loaded concept, and thus results in balancing the load across the set of candidate routes. In a recent study, we established the inadequacy of this load balancing practice and proposed the use of load profiling as an alternative. Load profiling techniques allow the distribution of "available" bandwidth across a set of candidate routes to match the characteristics of incoming VC QoS requests. In this paper we thoroughly characterize the performance of VC routing using load profiling and contrast it to routing using load balancing and load packing. We do so both analytically and via extensive simulations of multi-class traffic routing in Virtual Path (VP) based networks. Our findings confirm that for routing guaranteed bandwidth flows in VP networks, load balancing is not desirable as it results in VP bandwidth fragmentation, which adversely affects the likelihood of accepting new VC requests. This fragmentation is more pronounced when the granularity of VC requests is large. Typically, this occurs when a common VC is established to carry the aggregate traffic flow of many high-bandwidth real-time sources. For VP-based networks, our simulation results show that our load-profiling VC routing scheme performs as well as or better than the traditional load-balancing VC routing in terms of revenue under both skewed and uniform workloads. Furthermore, load-profiling routing improves routing fairness by proactively increasing the chances of admitting high-bandwidth connections.
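The fragmentation effect described above can be illustrated with a toy admission model (a sketch with made-up numbers; best-fit packing stands in for the packing end of the spectrum and is not the paper's load-profiling scheme):

```python
# Illustrative sketch: least-loaded "balancing" spreads small VCs across
# all candidate routes and fragments bandwidth, while a packing-style rule
# preserves a large contiguous block for a later high-bandwidth request.

def least_loaded(routes, demand):
    """Pick the feasible candidate route with the most available bandwidth."""
    feasible = [r for r in routes if routes[r] >= demand]
    return max(feasible, key=lambda r: routes[r], default=None)

def best_fit(routes, demand):
    """Pick the feasible route with the least available bandwidth (packing)."""
    feasible = [r for r in routes if routes[r] >= demand]
    return min(feasible, key=lambda r: routes[r], default=None)

def admit(routes, demands, policy):
    """Route a sequence of VC demands under a policy; count acceptances."""
    routes = dict(routes)
    accepted = 0
    for d in demands:
        r = policy(routes, d)
        if r is not None:
            routes[r] -= d
            accepted += 1
    return accepted

# Two VPs of 10 units; four small VCs arrive, then one 8-unit VC.
demands = [2] * 4 + [8]
print(admit({"vp1": 10, "vp2": 10}, demands, least_loaded))  # balancing: big VC blocked
print(admit({"vp1": 10, "vp2": 10}, demands, best_fit))      # packing: big VC admitted
```

Balancing leaves 6 units free on each VP, so neither can carry the 8-unit request; packing concentrates the small flows on one VP and keeps the other whole.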
Abstract:
In college courses dealing with material that requires mathematical rigor, the adoption of a machine-readable representation for formal arguments can be advantageous. Students can focus on a specific collection of constructs that are represented consistently. Examples and counterexamples can be evaluated. Assignments can be assembled and checked with the help of an automated formal reasoning system. However, usability and accessibility do not have a high priority and are not addressed sufficiently well in the design of many existing machine-readable representations and corresponding formal reasoning systems. In earlier work [Lap09], we attempt to address this broad problem by proposing several specific design criteria organized around the notion of a natural context: the sphere of awareness a working human user maintains of the relevant constructs, arguments, experiences, and background materials necessary to accomplish the task at hand. We report on our attempt to evaluate our proposed design criteria by deploying within the classroom a lightweight formal verification system designed according to these criteria. The lightweight formal verification system was used within the instruction of a common application of formal reasoning: proving by induction formal propositions about functional code. We present all of the formal reasoning examples and assignments considered during this deployment, most of which are drawn directly from an introductory text on functional programming. We demonstrate how the design of the system improves the effectiveness and understandability of the examples, and how it aids in the instruction of basic formal reasoning techniques. We make brief remarks about the practical and administrative implications of the system’s design from the perspectives of the student, the instructor, and the grader.
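A typical instance of the kind of proposition described, proved by induction on functional code, is that list length distributes over append (an assumed textbook example, not necessarily one drawn from [Lap09]). The sketch below states the claim over cons-style lists and spot-checks it:

```python
# Proposition (provable by structural induction on xs):
#   length(append(xs, ys)) == length(xs) + length(ys)
# Base case: xs = None. Inductive step: xs = (h, t), assuming the claim for t.

def length(xs):
    """Recursive length over a cons list represented as nested (head, tail) tuples."""
    return 0 if xs is None else 1 + length(xs[1])

def append(xs, ys):
    """Recursive append; an induction proof mirrors this recursion exactly."""
    return ys if xs is None else (xs[0], append(xs[1], ys))

def from_list(items):
    """Build a cons list (head, tail) from a Python list."""
    out = None
    for x in reversed(items):
        out = (x, out)
    return out

xs, ys = from_list([1, 2, 3]), from_list([4, 5])
assert length(append(xs, ys)) == length(xs) + length(ys)
print(length(append(xs, ys)))  # 5
```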
Abstract:
Research on the construction of logical overlay networks has gained significance in recent times. This is partly due to work on peer-to-peer (P2P) systems for locating and retrieving distributed data objects, and also scalable content distribution using end-system multicast techniques. However, there are emerging applications that require the real-time transport of data from various sources to potentially many thousands of subscribers, each having their own quality-of-service (QoS) constraints. This paper primarily focuses on the properties of two popular topologies found in interconnection networks, namely k-ary n-cubes and de Bruijn graphs. The regular structure of these graph topologies makes them easier to analyze and determine possible routes for real-time data than complete or irregular graphs. We show how these overlay topologies compare in their ability to deliver data according to the QoS constraints of many subscribers, each receiving data from specific publishing hosts. Comparisons are drawn on the ability of each topology to route data in the presence of dynamic system effects, due to end-hosts joining and departing the system. Finally, experimental results show the service guarantees and physical link stress resulting from efficient multicast trees constructed over both kinds of overlay networks.
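The routing convenience of the de Bruijn topology mentioned above can be sketched as follows (an illustrative binary de Bruijn construction over integer node ids, not the paper's overlay protocol): shifting the destination identifier's bits into the current node reaches any node in at most n hops.

```python
# Binary de Bruijn graph on n-bit identifiers: each node u has edges to
# (2u mod 2^n) and (2u+1 mod 2^n), i.e. "shift left and append a bit".

def neighbors(node, n):
    """Successors of an n-bit node id: shift left and append 0 or 1."""
    mask = (1 << n) - 1
    return [((node << 1) | b) & mask for b in (0, 1)]

def route(src, dst, n):
    """Hop-by-hop route from src to dst by shifting in dst's bits, MSB first."""
    path, cur = [src], src
    for i in range(n - 1, -1, -1):
        bit = (dst >> i) & 1
        cur = ((cur << 1) | bit) & ((1 << n) - 1)
        path.append(cur)
    return path

# 3-bit identifier space (8 overlay nodes): route 000 -> 101 in 3 hops.
print([format(x, "03b") for x in route(0b000, 0b101, 3)])
```

A real overlay would shorten the route when a suffix of the source already matches a prefix of the destination; the fixed n-hop shift shown here is the worst case and is what makes route lengths easy to analyze.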
Abstract:
An improved Boundary Contour System (BCS) and Feature Contour System (FCS) neural network model of preattentive vision is applied to large images containing range data gathered by a synthetic aperture radar (SAR) sensor. The goal of processing is to make structures such as motor vehicles, roads, or buildings more salient and more interpretable to human observers than they are in the original imagery. Early processing by shunting center-surround networks compresses signal dynamic range and performs local contrast enhancement. Subsequent processing by filters sensitive to oriented contrast, including short-range competition and long-range cooperation, segments the image into regions. The segmentation is performed by three "copies" of the BCS and FCS, of small, medium, and large scales, wherein the "short-range" and "long-range" interactions within each scale occur over smaller or larger distances, corresponding to the size of the early filters of each scale. A diffusive filling-in operation within the segmented regions at each scale produces coherent surface representations. The combination of BCS and FCS helps to locate and enhance structure over regions of many pixels, without the resulting blur characteristic of approaches based on low spatial frequency filtering alone.
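The shunting center-surround stage described above can be sketched in one dimension (the parameter values and the neighbour-mean surround are illustrative assumptions, not the model's kernels): each cell's steady-state activity is a ratio of center-minus-surround input to total input, which is what compresses dynamic range while enhancing local contrast.

```python
# Steady state of a shunting center-surround cell:
#   x_i = (B*C_i - D*S_i) / (A + C_i + S_i)
# where C_i is the center input and S_i the surround input.

def shunting_center_surround(signal, A=1.0, B=1.0, D=1.0):
    """1-D shunting network; surround is the mean of the two neighbours."""
    out = []
    for i, c in enumerate(signal):
        left = signal[max(i - 1, 0)]            # edge cells clamp to themselves
        right = signal[min(i + 1, len(signal) - 1)]
        s = 0.5 * (left + right)
        out.append((B * c - D * s) / (A + c + s))
    return out

# The same step edge at two overall intensities: responses concentrate at
# the edge and stay similar despite the 10x difference in input level.
dim = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
bright = [10.0, 10.0, 10.0, 50.0, 50.0, 50.0]
print([round(x, 2) for x in shunting_center_surround(dim)])
print([round(x, 2) for x in shunting_center_surround(bright)])
```

Away from the edge the output is near zero at both intensities, illustrating the discounting of the absolute illumination level.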
Abstract:
The human urge to represent the three-dimensional world using two-dimensional pictorial representations dates back at least to Paleolithic times. Artists from ancient to modern times have struggled to understand how a few contours or color patches on a flat surface can induce mental representations of a three-dimensional scene. This article summarizes some of the recent breakthroughs in scientifically understanding how the brain sees that shed light on these struggles. These breakthroughs illustrate how various artists have intuitively understood paradoxical properties of how the brain sees, and have used that understanding to create great art. These paradoxical properties arise from how the brain forms the units of conscious visual perception; namely, representations of three-dimensional boundaries and surfaces. Boundaries and surfaces are computed in parallel cortical processing streams that obey computationally complementary properties. These streams interact at multiple levels to overcome their complementary weaknesses and to transform their complementary properties into consistent percepts. The article describes how properties of complementary consistency have guided the creation of many great works of art.
Abstract:
An improved Boundary Contour System (BCS) and Feature Contour System (FCS) neural network model of preattentive vision is applied to two large images containing range data gathered by a synthetic aperture radar (SAR) sensor. The goal of processing is to make structures such as motor vehicles, roads, or buildings more salient and more interpretable to human observers than they are in the original imagery. Early processing by shunting center-surround networks compresses signal dynamic range and performs local contrast enhancement. Subsequent processing by filters sensitive to oriented contrast, including short-range competition and long-range cooperation, segments the image into regions. Finally, a diffusive filling-in operation within the segmented regions produces coherent visible structures. The combination of BCS and FCS helps to locate and enhance structure over regions of many pixels, without the resulting blur characteristic of approaches based on low spatial frequency filtering alone.
Abstract:
The processes by which humans and other primates learn to recognize objects have been the subject of many models. Processes such as learning, categorization, attention, memory search, expectation, and novelty detection work together at different stages to realize object recognition. In this article, Gail Carpenter and Stephen Grossberg describe one such model class (Adaptive Resonance Theory, ART) and discuss how its structure and function might relate to known neurological learning and memory processes, such as how inferotemporal cortex can recognize both specialized and abstract information, and how medial temporal amnesia may be caused by lesions in the hippocampal formation. The model also suggests how hippocampal and inferotemporal processing may be linked during recognition learning.
Abstract:
A computational model of visual processing in the vertebrate retina provides a unified explanation of a range of data previously treated by disparate models. Three results are reported here: the model proposes a functional explanation for the primary feed-forward retinal circuit found in vertebrate retinae, it shows how this retinal circuit combines nonlinear adaptation with the desirable properties of linear processing, and it accounts for the origin of parallel transient (nonlinear) and sustained (linear) visual processing streams as simple variants of the same retinal circuit. The retina, owing to its accessibility and to its fundamental role in the initial transduction of light into neural signals, is among the most extensively studied neural structures in the nervous system. Since the pioneering anatomical work by Ramón y Cajal at the turn of the last century[1], technological advances have abetted detailed descriptions of the physiological, pharmacological, and functional properties of many types of retinal cells. However, the relationship between structure and function in the retina is still poorly understood. This article outlines a computational model developed to address fundamental constraints of biological visual systems. Neurons that process nonnegative input signals-such as retinal illuminance-are subject to an inescapable tradeoff between accurate processing in the spatial and temporal domains. Accurate processing in both domains can be achieved with a model that combines nonlinear mechanisms for temporal and spatial adaptation within three layers of feed-forward processing. The resulting architecture is structurally similar to the feed-forward retinal circuit connecting photoreceptors to retinal ganglion cells through bipolar cells. This similarity suggests that the three-layer structure observed in all vertebrate retinae[2] is a required minimal anatomy for accurate spatiotemporal visual processing. 
This hypothesis is supported through computer simulations showing that the model's output layer accounts for many properties of retinal ganglion cells[3],[4],[5],[6]. Moreover, the model shows how the retina can extend its dynamic range through nonlinear adaptation while exhibiting seemingly linear behavior in response to a variety of spatiotemporal input stimuli. This property is the basis for the prediction that the same retinal circuit can account for both sustained (X) and transient (Y) cat ganglion cells[7] by simple morphological changes. The ability to generate distinct functional behaviors by simple changes in cell morphology suggests that different functional pathways originating in the retina may have evolved from a unified anatomy designed to cope with the constraints of low-level biological vision.
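As a loose illustration of the sustained (X) versus transient (Y) distinction (an assumed toy dynamic, not the article's retinal circuit equations), a single divisively adapting stage can be read out in two ways, mimicking how simple morphological variants of one circuit could yield both behaviors:

```python
# One adaptive stage, two readouts: the "sustained" readout keeps the
# gain-controlled drive, the "transient" readout subtracts the slow
# adaptation state so only changes in the input survive.

def respond(stimulus, tau_adapt=5.0, transient=True):
    """Divisive temporal adaptation with a transient or sustained readout."""
    state, out = 0.0, []
    for s in stimulus:
        state += (s - state) / tau_adapt      # slow adaptation to the input
        drive = s / (1.0 + state)             # divisive gain control
        out.append((drive - state / (1.0 + state)) if transient else drive)
    return out

# Step stimulus: the transient variant peaks at onset and decays toward
# zero; the sustained variant settles to a steady plateau.
step = [0.0] * 3 + [1.0] * 12
y_like = respond(step, transient=True)
x_like = respond(step, transient=False)
print(round(max(y_like), 2), round(y_like[-1], 2))  # onset peak vs decayed tail
print(round(x_like[-1], 2))                         # sustained plateau
```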
Abstract:
Background: Irritable bowel syndrome (IBS) is a common disorder that affects 10–15% of the population. Although characterised by a lack of reliable biological markers, the disease state is increasingly viewed as a disorder of the brain-gut axis. In particular, accumulating evidence points to the involvement of both the central and peripheral serotonergic systems in disease symptomatology. Furthermore, altered tryptophan metabolism and indoleamine 2,3-dioxygenase (IDO) activity are hallmarks of many stress-related disorders. The kynurenine pathway of tryptophan degradation may serve to link these findings to the low level immune activation recently described in IBS. In this study, we investigated tryptophan degradation in a male IBS cohort (n = 10) and control subjects (n = 26). Methods: Plasma samples were obtained from patients and healthy controls. Tryptophan and its metabolites were measured by high performance liquid chromatography (HPLC) and neopterin, a sensitive marker of immune activation, was measured using a commercially available ELISA assay. Results: Both kynurenine levels and the kynurenine:tryptophan ratio were significantly increased in the IBS cohort compared with healthy controls. Neopterin was also increased in the IBS subjects and the concentration of the neuroprotective metabolite kynurenic acid was decreased, as was the kynurenic acid:kynurenine ratio. Conclusion: These findings suggest that the activity of IDO, the immunoresponsive enzyme which is responsible for the degradation of tryptophan along this pathway, is enhanced in IBS patients relative to controls. This study provides novel evidence for an immune-mediated degradation of tryptophan in a male IBS population and identifies the kynurenine pathway as a potential source of biomarkers in this debilitating condition.
Abstract:
Cerium dioxide (ceria) nanoparticles have been the subject of intense academic and industrial interest. Ceria has a host of applications; academic interest largely stems from its use in the modern automotive catalyst, but it is also of interest in many other application areas, notably as the abrasive in chemical-mechanical planarisation of silicon substrates. Recently, ceria has been the focus of research investigating the health effects of nanoparticles. Importantly, the role of non-stoichiometry in ceria nanoparticles is implicated in their biochemistry. Ceria has well-understood non-stoichiometry based around the ease of formation of anion vacancies, and these can form ordered superstructures based on the fluorite lattice structure exhibited by ceria. The anion vacancies are associated with localised or small-polaron states formed by the electrons that remain after oxygen desorption. In simple terms, these electrons combine with Ce4+ states to form Ce3+ states, whose larger ionic radius is associated with a lattice expansion compared to stoichiometric CeO2. This is a very simplistic explanation, and greater defect-chemistry complexity is suggested by more recent work. Various authors have shown that vacancies are mobile and may result in vacancy clustering. Ceria nanoparticles are of particular interest because of the high activity and surface area of small particulates. The sensitivity of the cerium electronic band structure to its environment suggests that changes in the properties of ceria particles at nanoscale dimensions might be expected. Notably, many authors report a lattice expansion with reducing particle size (largely confined to sub-10 nm particles). Most authors assign the increased lattice dimensions to the presence of a stable surface Ce2O3-type layer at low nanoparticle dimensions. However, our understanding of oxide nanoparticles is limited, and their full and quantitative characterisation offers serious challenges.
In a series of our own chemical preparations we see little evidence of a consistent model emerging to explain lattice parameter changes with nanoparticle size. Based on these results and a review of the literature, it is worthwhile asking whether a model of surface-enhanced defect concentration is consistent with known cerium/cerium oxide chemistries, whether it is applicable to a range of different synthesis methods, and whether a more consistent description is possible. In Chapter one the science of cerium oxide is outlined, including the crystal structure, defect chemistry and the different oxidation states available. The uses and applications of cerium oxide are also discussed, as well as modelling of the lattice parameter and the doping of the ceria lattice. Chapter two describes both the synthesis techniques and the analytical methods employed to execute this research. Chapter three focuses on high-surface-area ceria nanoparticles and how these have been prepared using a citrate sol-gel precipitation method. Changes to the particle size have been made by calcining the ceria powders at different temperatures. X-ray diffraction methods were used to determine their lattice parameters. The particle sizes were also assessed using transmission electron microscopy (TEM), scanning electron microscopy (SEM) and BET, and the lattice parameter was found to decrease with decreasing particle size. The results are discussed in light of the role played by surface tension effects. Chapter four describes the morphological and structural characterization of crystalline CeO2 nanoparticles prepared by forward and reverse precipitation techniques and compares these by powder X-ray diffraction (PXRD), nitrogen adsorption (BET) and high-resolution transmission electron microscopy (HRTEM) analysis. The two routes give quite different materials, although in both cases the products are essentially highly crystalline, dense particulates.
It was found that the reverse precipitation technique gave the smallest crystallites with the narrowest size dispersion. This route also gave as-synthesised materials with higher surface areas. HRTEM confirmed the observations made from PXRD data and showed that the two methods resulted in quite different morphologies and surface chemistries. The forward route gives products with significantly greater densities of Ce3+ species compared to the reverse route. The data are explained using known precipitation chemistry and kinetic effects. Chapter five centres on the addition of terbia to ceria, investigated using XRD, XRF, XPS and TEM. Good solid solutions were formed across the entire composition range, and there was no evidence for the formation of mixed phases or surface segregation over either the composition or temperature range investigated. Both Tb3+ and Tb4+ ions exist within the solution, and the ratios of these cations are consistent with the addition of Tb8O15 to the fluorite ceria structure across a wide range of compositions. Local regions of anion vacancy ordering may be visible for small crystallites. There is no evidence of significant Ce3+ ion concentrations formed at the surface or in the bulk by the addition of terbia. The lattice parameter of these materials was seen to decrease with decreasing crystallite size, consistent with increased surface tension effects at small dimensions. Chapter six reviews size-related lattice parameter changes and surface defects in ceria nanocrystals. Ceria (CeO2) has many important applications, notably in catalysis, and many of its uses rely on generating nanodimensioned particles. Ceria has important redox chemistry in which Ce4+ cations can be reversibly reduced to Ce3+ cations with associated anion vacancies. The significantly larger size of Ce3+ (compared with Ce4+) has been shown to result in lattice expansion.
Many authors have observed lattice expansion in nanodimensioned crystals (nanocrystals), and this has been attributed to the presence of stabilized Ce3+-anion vacancy combinations in these systems. Experimental results presented here show (i) that significant but complex changes in the lattice parameter with size can occur in 2-500 nm crystallites, (ii) that there is a definitive relationship between defect chemistry and the lattice parameter in ceria nanocrystals, and (iii) that the stabilizing mechanism for the Ce3+-anion vacancy defects at the surface of ceria nanocrystals is determined by the size, the surface status, and the analysis conditions. In this work, both lattice expansion and a more unusual lattice contraction in ultrafine nanocrystals are observed. The lattice deformations seen can be defined as a function of both the anion vacancy (hydroxyl) concentration in the nanocrystal and the intensity of the additional pressure imposed by the surface tension on the crystal. The expansion of lattice parameters in ceria nanocrystals is attributed to a number of factors, most notably the presence of any hydroxyl moieties in the materials. Thus, a very careful understanding of the synthesis, combined with characterization, is required to understand the surface chemistry of ceria nanocrystals.
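The link between Ce3+ formation and lattice expansion can be put in rough numbers with a hard-sphere fluorite model (assumptions: Shannon ionic radii, simple Vegard-type averaging of the cation radius, anion-vacancy relaxation and surface tension ignored), which is a back-of-envelope sketch rather than the thesis's analysis:

```python
import math

# Shannon ionic radii in angstroms: Ce4+ (VIII), Ce3+ (VIII), O2- (IV).
R_CE4, R_CE3, R_O = 0.97, 1.143, 1.38

def lattice_parameter(x_ce3):
    """Fluorite lattice parameter from the mean cation radius.

    In the fluorite structure the cation-anion distance is a*sqrt(3)/4,
    so a = 4/sqrt(3) * (r_cation + r_anion); the cation radius is a linear
    mix of Ce4+ and Ce3+ according to the Ce3+ fraction x_ce3.
    """
    r_cat = (1.0 - x_ce3) * R_CE4 + x_ce3 * R_CE3
    return 4.0 / math.sqrt(3.0) * (r_cat + R_O)

for x in (0.0, 0.1, 0.2):
    print(f"Ce3+ fraction {x:.1f}: a = {lattice_parameter(x):.3f} A")
```

Even this crude model recovers the right sense and scale: a ≈ 5.43 A for stoichiometric CeO2 (close to the accepted 5.411 A) and a steady expansion of a few hundredths of an angstrom as the Ce3+ fraction grows.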
Abstract:
The abundances of many commercially important fish stocks are declining, and this has led to widespread concern about the performance of the traditional approach to fisheries management. Quantitative models are used to obtain estimates of population abundance, and management advice is based on annual harvest levels (total allowable catch, TAC), whereby only a certain amount of catch is allowed from specific fish stocks. However, these models are data intensive and less useful when stocks have limited historical information. This study examined whether empirical stock indicators can be used to manage fisheries. The relationship between indicators and the underlying stock abundance is not direct and hence can be affected by disturbances that may account for both transient and persistent effects. Methods from Statistical Process Control (SPC) theory, such as Cumulative Sum (CUSUM) control charts, are useful in classifying these effects, and hence they can be used to trigger a management response only when a significant impact on the stock biomass occurs. This thesis explores how empirical indicators, along with CUSUM, can be used for the monitoring, assessment and management of fish stocks. I begin my thesis by exploring various age-based catch indicators to identify those which are potentially useful in tracking the state of fish stocks. The sensitivity and response of these indicators to changes in Spawning Stock Biomass (SSB) showed that indicators based on age groups that are fully selected to the fishing gear, or Large Fish Indicators (LFIs), are the most useful and robust across the range of scenarios considered. The Decision-Interval (DI-CUSUM) and Self-Starting (SS-CUSUM) forms are the two types of control charts used in this study. In contrast to the DI-CUSUM, the SS-CUSUM can be initiated without specifying a target reference point (the 'control mean') to detect out-of-control (significant impact) situations.
The sensitivity and specificity of the SS-CUSUM showed that its performance is robust when LFIs are used. Once an out-of-control situation is detected, the next step is to determine how much shift has occurred in the underlying stock biomass. If an estimate of this shift is available, it can be used to update the TAC by incorporation into Harvest Control Rules (HCRs). Various methods from Engineering Process Control (EPC) theory were tested to determine which can measure the shift size in stock biomass with the highest accuracy. Results showed that methods based on Grubbs' harmonic rule gave reliable shift-size estimates. The accuracy of these estimates can be improved by monitoring a combined indicator metric of stock-recruitment and LFI, because this may account for impacts independent of fishing. The procedure of integrating both SPC and EPC is known as Statistical Process Adjustment (SPA). An HCR based on SPA was designed for the DI-CUSUM, and the scheme was successful in bringing out-of-control fish stocks back to their in-control state. The HCR was also tested using the SS-CUSUM in the context of data-poor fish stocks. Results showed that the scheme will be useful for sustaining the initial in-control state of a fish stock until more observations become available for quantitative assessments.
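The decision-interval CUSUM at the heart of the approach has a standard tabular form (this is the generic SPC formulation with illustrative parameter values and data, not the thesis's tuning or indicators): small deviations of an indicator from its target accumulate until the sum crosses the decision interval h, which triggers the out-of-control signal.

```python
# Tabular (decision-interval) CUSUM with reference value k and decision
# interval h; cp tracks drift above the target mu0, cm drift below it.

def cusum(xs, mu0, k, h):
    """Return (upper_sums, lower_sums, index of first signal or None)."""
    cp = cm = 0.0
    ups, downs, signal = [], [], None
    for i, x in enumerate(xs):
        cp = max(0.0, cp + (x - mu0 - k))   # accumulate drift above target
        cm = max(0.0, cm + (mu0 - k - x))   # accumulate drift below target
        ups.append(cp)
        downs.append(cm)
        if signal is None and (cp > h or cm > h):
            signal = i
    return ups, downs, signal

# A large-fish indicator hovering near its target of 1.0, followed by a
# persistent decline: no single observation is alarming, but the lower
# cumulative sum crosses h and signals a sustained shift.
lfi = [1.02, 0.98, 1.01, 0.99, 0.90, 0.88, 0.87, 0.86]
_, downs, when = cusum(lfi, mu0=1.0, k=0.02, h=0.25)
print(when)                 # index of the first out-of-control signal
print(round(downs[-1], 2))  # final value of the lower cumulative sum
```

The reference value k acts as an allowance for noise (only deviations beyond k accumulate), which is why the chart ignores the early fluctuation yet responds quickly once the decline persists.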
Abstract:
The thesis examines cultural processes underpinning the emergence, institutionalisation and reproduction of class boundaries in Limerick city. The research aims to bring a new understanding to the contemporary context of the city’s urban regeneration programme. Acknowledging and recognising other contemporary studies of division and exclusion, the thesis creates a distinctive approach which focuses on uncovering the cultural roots of inequality, educational disadvantage, stigma and social exclusion and the dynamics of their social reproduction. Using Bateson’s concept of schismogenesis (1953), the thesis looks to the persistent, but fragmented culture of community and develops a heuristic ‘symbolic order of the city’. This is defined as “…a cultural structure, the meaning making aspect of hierarchy, the categorical structures of world understanding, the way Limerick people understand themselves, their local and larger world” (p. 37). This provides a very different departure point for exploring the basis for urban regeneration in Limerick (and everywhere). The central argument is that if we want to understand the present (multiple) crises in Limerick we need to understand the historical, anthropological and recursive processes underpinning ‘generalised patterns of rivalry and conflict’. In addition to exploring the historical roots of status and stigma in Limerick, the thesis explores the mythopoesis of persistent, recurrent narratives and labels that mark the boundaries of the city’s identities. The thesis examines the cultural and social function of ‘slagging’ as a vernacular and highly particularised form of ironic, ritualised and, often, ‘cruel’ medium of communication (often exclusion). This is combined with an etymology of the vocabulary of Limerick slang and its mythological base. 
By tracing the origins of many normalised patterns of Limerick speech ('sayings') whose roots have long since been forgotten, the thesis demonstrates how they perform a significant contemporary function in maintaining and reinforcing symbolic mechanisms of inclusion/exclusion. The thesis combines historical and archival data with biographical interviews and ethnographic data, married to a deep historical hermeneutic analysis of this political community.