13 results for False consciousness
in Boston University Digital Common
Abstract:
Comments on an article entitled 'No Good News for DATA,' by Norman Lillegard in the Spring 1994 issue of Cross Currents magazine. Covers Lillegard's position that the android Commander Data in Star Trek fails to satisfy the biblical conception of persons; Lillegard's presentation of models that espouse functionalist theories of mind; and views of the essence of the human person.
Abstract:
The concept of attention has been used in many senses, often without clarifying how or why attention works as it does. Attention, like consciousness, is often described in a disembodied way. The present article summarizes neural models and supportive data on how attention is linked to processes of learning, expectation, competition, and consciousness. A key theme is that attention modulates cortical self-organization and stability. Perceptual and cognitive neocortex is organized into six main cell layers, with characteristic sublaminae. Attention is part of a unified design of bottom-up, horizontal, and top-down interactions among identified cells in laminar cortical circuits. Neural models clarify how attention may be allocated during processes of visual perception, learning, and search; auditory streaming and speech perception; movement target selection during sensory-motor control; mental imagery and fantasy; and hallucination during mental disorders, among other processes.
Abstract:
A full understanding of consciousness requires that we identify the brain processes from which conscious experiences emerge. What are these processes, and what is their utility in supporting successful adaptive behaviors? Adaptive Resonance Theory (ART) predicted a functional link between processes of Consciousness, Learning, Expectation, Attention, Resonance, and Synchrony (CLEARS), including the prediction that "all conscious states are resonant states." This connection clarifies how brain dynamics enable a behaving individual to autonomously adapt in real time to a rapidly changing world. The present article reviews theoretical considerations that predicted these functional links, how they work, and some of the rapidly growing body of behavioral and brain data that have provided support for these predictions. The article also summarizes ART models that predict functional roles for identified cells in laminar thalamocortical circuits, including the six-layered neocortical circuits and their interactions with specific primary and higher-order specific thalamic nuclei and nonspecific nuclei. These predictions include explanations of how slow perceptual learning can occur more frequently in superficial cortical layers. ART traces these properties to the existence of intracortical feedback loops, and to reset mechanisms whereby thalamocortical mismatches use circuits such as the one from specific thalamic nuclei to nonspecific thalamic nuclei and then to layer 4 of neocortical areas via layers 1-to-5-to-6-to-4.
Abstract:
Many kinds of human states of consciousness have been distinguished, including colourful or anomalous experiences that are felt to have spiritual significance by most people who have them. The neurosciences have isolated brain-state correlates for some of these colourful states of consciousness, thereby strengthening the hypothesis that these experiences are mediated by the brain. This result both challenges metaphysically dualist accounts of human nature and suggests that any adequate causal explanation of colourful experiences would have to make detailed reference to the evolutionary and genetic conditions that give rise to brains capable of such conscious phenomena. This paper quickly surveys types of conscious states and neurological interpretations of them. In order to deal with the question of the significance of such experiences, the paper then attempts to identify evolutionary and genetic constraints on proposals for causal explanations of such experiences. The conclusion is that a properly sensitive evolutionary account of human consciousness supports a rebuttal of the argument that the cognitive content of colourful experiences is pure delusion, but that this evolutionary account also heavily constrains what might be inferred theologically from such experiences. They are not necessarily delusory, therefore, but they are often highly misleading. Their significance must be construed consistently with this conclusion.
Abstract:
Africa faces problems of ecological devastation caused by economic exploitation, rapid population growth, and poverty. Capitalism, residual colonialism, and corruption undermine Africa's efforts to forge a better future. The dissertation describes how in Africa the mounting ecological crisis has religious, political, and economic roots that enable and promote social and environmental harm. It presents the thesis that religious traditions, including their ethical expressions, can effectively address the crisis, ameliorate its impacts, and advocate for social and environmental betterment, now and in the future. First, it examines African traditional religion and Christian teaching, which together provide the foundation for African Christianity. Critical examination of both religious worldviews uncovers their complementary emphases on human responsibility toward planet Earth and future generations. Second, an analysis of the Gwembe Tonga of Chief Simamba explores the interconnectedness of all elements of the universe in African cosmologies. In Africa, an interdependent, participatory relationship exists between the world of animals, the world of humans, and the Creator. In discussing the annual lwiindi (rain calling) ceremony of Simamba, the study explores ecological overtones of African religions. Such rituals illustrate the involvement of ancestors and high gods in maintaining ecological integrity. Third, the foundation of the African morality of abundant life is explored. Across Sub-Saharan Africa, ancestors' teachings are the foundation of morality; ancestors are guardians of the land. A complementary teaching that Christ is the ecological ancestor of all life can direct ethical responses to the ecological crisis. Fourth, the eco-social implications of ubuntu (what it means to be fully human) are examined. Some aspects of ubuntu are criticized in light of economic inequalities and corruption in Africa. 
However, ubuntu can be transformed to advocate for eco-social liberation. Fifth, the study recognizes that in some cases conflicts exist between ecological values and religious teachings. This conflict is examined in terms of the contrast between awareness of socioeconomic problems caused by population growth, on the one hand, and advocacy of a traditional African morality of abundant children, on the other hand. A change in the latter religious view is needed since overpopulation threatens sustainable living and the future of Earth. The dissertation concludes that the identification of Jesus with African ancestors and theological recognition of Jesus as the ecological ancestor, woven together with ubuntu, an ethic of interconnectedness, should characterize African consciousness and promote resolution of the socio-ecological crisis.
Abstract:
This study documents, analyzes, and interprets Korean American United Methodist (KAUM) clergywomen's experiences in and understandings of the church. It examines contributions these (and potentially, other) clergywomen might make to Wesleyan ecclesiology generally, and particular ways United Methodists live out their faith in transitional, diverse, and global contexts. The project attempts to re-vision existing Wesleyan ecclesial discourse in the United Methodist Church (UMC) by recognizing and incorporating the contributions of racial-ethnic clergy as expressed through their leadership and practices of faith. A "practice-theory-practice" model of practical theology was used to pay systematic attention to the practical locus of the inquiries. Twenty Korean American United Methodist clergywomen were interviewed by telephone, using a voluntary sampling technique to ascertain how they both experienced the church and understood and lived out various practices of faith, including preaching, participation in and administration of the sacraments, preparation for ordained ministry, and other spiritual practices such as prayer, worship, retreats, and journaling. The dissertation summarizes those findings, provides contextual and historical interpretation, and then analyzes their responses in relation to Wesleyan theology, MinJung (mass of people) theology, and the theology of YeoSung (women who display dignity and honor as human beings). This study identifies the extraordinary call of the KAUM clergywomen interviewees to be bridge builders, strong nurturers, wounded healers, committed educators, breakers of old stereotypes, persistent seekers to fulfill God's call, and ecclesial leaders with "tragic consciousness" who can disrupt marginality and facilitate the creative transformation of Han (a deep experience of suffering and oppression) into a constructive energy capable of shaping a new reality.
According to this study, KAUM clergywomen's experiences and practices of faith as ecclesial leaders strengthen Wesleyan ecclesiology in terms of the UMC's efforts to be an inclusive church through connectionalism, and its commitment to social justice. MinJung theology and the theology of YeoSung, in their respective understandings of the church, broaden Wesleyan ecclesiology and enable the Church to be more relevant in a global context by embracing those who have not been normative theological subjects.
Abstract:
Anomalies are unusual and significant changes in a network's traffic levels, which can often involve multiple links. Diagnosing anomalies is critical for both network operators and end users. It is a difficult problem because one must extract and interpret anomalous patterns from large amounts of high-dimensional, noisy data. In this paper we propose a general method to diagnose anomalies. This method is based on a separation of the high-dimensional space occupied by a set of network traffic measurements into disjoint subspaces corresponding to normal and anomalous network conditions. We show that this separation can be performed effectively using Principal Component Analysis. Using only simple traffic measurements from links, we study volume anomalies and show that the method can: (1) accurately detect when a volume anomaly is occurring; (2) correctly identify the underlying origin-destination (OD) flow which is the source of the anomaly; and (3) accurately estimate the amount of traffic involved in the anomalous OD flow. We evaluate the method's ability to diagnose (i.e., detect, identify, and quantify) both existing and synthetically injected volume anomalies in real traffic from two backbone networks. Our method consistently diagnoses the largest volume anomalies, and does so with a very low false alarm rate.
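As a rough illustration of the subspace separation the abstract describes, the sketch below (illustrative data and parameter choices, not the paper's exact procedure) projects a toy link-traffic matrix onto the residual of its top principal components and scores each timestep by squared prediction error:

```python
import numpy as np

def pca_anomaly_scores(X, k):
    """Split link-traffic measurements into normal and residual subspaces.

    X: (timesteps x links) traffic matrix; k: number of principal
    components treated as the 'normal' subspace. Returns each timestep's
    squared prediction error (SPE) in the residual (anomalous) subspace.
    """
    Xc = X - X.mean(axis=0)                       # center each link's series
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k].T                                  # normal-subspace basis
    residual = Xc - Xc @ P @ P.T                  # part unexplained by P
    return np.sum(residual**2, axis=1)            # SPE per timestep

# toy usage: 200 timesteps, 4 links sharing a diurnal-like pattern,
# with one synthetic volume anomaly injected on a single link
rng = np.random.default_rng(0)
t = np.arange(200)
common = 5 * np.sin(2 * np.pi * t / 50)
X = common[:, None] + rng.normal(0, 0.5, (200, 4))
X[150, 2] += 25                                   # injected volume spike
spe = pca_anomaly_scores(X, k=1)
print(int(np.argmax(spe)))                        # timestep flagged as anomalous
```

A detection threshold on the SPE (e.g., the Q-statistic) would turn these scores into alarms; here the injected spike dominates the residual energy.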
Abstract:
One of TCP's critical tasks is to determine which packets are lost in the network, as a basis for control actions (flow control and packet retransmission). Modern TCP implementations use two mechanisms: timeout, and fast retransmit. Detection via timeout is necessarily a time-consuming operation; fast retransmit, while much quicker, is only effective for a small fraction of packet losses. In this paper we consider the problem of packet loss detection in TCP more generally. We concentrate on the fact that TCP's control actions are necessarily triggered by inference of packet loss, rather than conclusive knowledge. This suggests that one might analyze TCP's packet loss detection in a standard inferencing framework based on probability of detection and probability of false alarm. This paper makes two contributions to that end: First, we study an example of more general packet loss inference, namely optimal Bayesian packet loss detection based on round trip time. We show that for long-lived flows, it is frequently possible to achieve high detection probability and low false alarm probability based on measured round trip time. Second, we construct an analytic performance model that incorporates general packet loss inference into TCP. We show that for realistic detection and false alarm probabilities (as are achievable via our Bayesian detector) and for moderate packet loss rates, the use of more general packet loss inference in TCP can improve throughput by as much as 25%.
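The round-trip-time detector can be pictured as a simple Bayesian posterior computation. In the sketch below, the Gaussian RTT models and every numeric value are illustrative assumptions, not parameters from the paper:

```python
import math

def loss_posterior(rtt, mu_ok, sigma_ok, mu_loss, sigma_loss, prior_loss):
    """Posterior probability that a packet was lost, given its measured RTT.

    Models RTT under 'no loss' and 'loss' as Gaussians whose parameters
    would, in practice, be estimated from a long-lived flow's history.
    """
    def gauss(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    p_loss = gauss(rtt, mu_loss, sigma_loss) * prior_loss
    p_ok = gauss(rtt, mu_ok, sigma_ok) * (1 - prior_loss)
    return p_loss / (p_loss + p_ok)

# a loss is declared when the posterior clears a threshold chosen to trade
# detection probability against false alarm probability
post = loss_posterior(rtt=180.0, mu_ok=100.0, sigma_ok=15.0,
                      mu_loss=170.0, sigma_loss=30.0, prior_loss=0.02)
print(post > 0.5)
```

Sweeping the decision threshold traces out exactly the detection/false-alarm trade-off the abstract frames TCP's loss inference in.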
Abstract:
Object detection can be challenging when the object class exhibits large variations. One commonly-used strategy is to first partition the space of possible object variations and then train separate classifiers for each portion. However, with continuous spaces the partitions tend to be arbitrary since there are no natural boundaries (for example, consider the continuous range of human body poses). In this paper, a new formulation is proposed, where the detectors themselves are associated with continuous parameters, and reside in a parameterized function space. There are two advantages of this strategy. First, a-priori partitioning of the parameter space is not needed; the detectors themselves are in a parameterized space. Second, the underlying parameters for object variations can be learned from training data in an unsupervised manner. In profile face detection experiments, at a fixed false alarm number of 90, our method attains a detection rate of 75% vs. 70% for the method of Viola-Jones. In hand shape detection, at a false positive rate of 0.1%, our method achieves a detection rate of 99.5% vs. 98% for partition based methods. In pedestrian detection, our method reduces the miss detection rate by a factor of three at a false positive rate of 1%, compared with the method of Dalal-Triggs.
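One way to picture detectors residing in a parameterized function space is a linear classifier whose weight vector varies smoothly with a continuous pose parameter. The polynomial basis and all numbers below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def detector_weights(W, theta):
    """Detector as a point in a parameterized function space: the linear
    classifier's weights vary smoothly with a continuous pose parameter
    theta through a small polynomial basis."""
    basis = np.array([1.0, theta, theta**2])      # phi(theta)
    return W @ basis                              # w(theta) = W phi(theta)

def detect(W, x, thetas):
    """Score a feature vector x over a grid of pose parameters; return the
    best-scoring pose and its score (no a-priori partition of the space)."""
    scores = [detector_weights(W, t) @ x for t in thetas]
    best = int(np.argmax(scores))
    return thetas[best], scores[best]

# toy: 2-D features, weights drifting linearly with theta
W = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.3, 0.0]])
theta_hat, s = detect(W, x=np.array([1.0, 1.0]), thetas=np.linspace(-1, 1, 5))
print(theta_hat)
```

The point of the construction is that no hard partition of the pose range is ever chosen; the detector is evaluated at any continuous parameter value.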
Abstract:
The problem of discovering frequent poly-regions (i.e. regions of high occurrence of a set of items or patterns of a given alphabet) in a sequence is studied, and three efficient approaches are proposed to solve it. The first one is entropy-based and applies a recursive segmentation technique that produces a set of candidate segments which may potentially lead to a poly-region. The key idea of the second approach is the use of a set of sliding windows over the sequence. Each sliding window covers a sequence segment and keeps a set of statistics that mainly include the number of occurrences of each item or pattern in that segment. Combining these statistics efficiently yields the complete set of poly-regions in the given sequence. The third approach applies a technique based on the majority vote, achieving linear running time with a minimal number of false negatives. After identifying the poly-regions, the sequence is converted to a sequence of labeled intervals (each one corresponding to a poly-region). An efficient algorithm for mining frequent arrangements of intervals is applied to the converted sequence to discover frequently occurring arrangements of poly-regions in different parts of DNA, including coding regions. The proposed algorithms are tested on various DNA sequences producing results of significant biological meaning.
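The second (sliding-window) approach can be sketched with per-symbol counts updated incrementally as the window slides; the window width and density threshold below are arbitrary illustrative choices:

```python
from collections import Counter

def poly_regions(seq, window, min_density):
    """Flag window positions where some symbol's frequency within the
    window reaches min_density (a simplified sketch of the paper's
    sliding-window statistics)."""
    counts = Counter(seq[:window])
    hits = []
    for start in range(len(seq) - window + 1):
        if start > 0:                          # slide: drop left, add right
            counts[seq[start - 1]] -= 1
            counts[seq[start + window - 1]] += 1
        sym, c = counts.most_common(1)[0]
        if c / window >= min_density:
            hits.append((start, sym))
    return hits

# an 'A'-rich region embedded in a mixed background
print(poly_regions("GCGTGC" + "AAAAAAAA" + "TGCGTC", window=8, min_density=0.9))
```

Each flagged window marks a candidate poly-region; overlapping hits would then be merged into labeled intervals for the arrangement-mining step the abstract describes.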
Abstract:
Under natural viewing conditions, a single depthful percept of the world is consciously seen. When dissimilar images are presented to corresponding regions of the two eyes, binocular rivalry may occur, during which the brain consciously perceives alternating percepts through time. How do the same brain mechanisms that generate a single depthful percept of the world also cause perceptual bistability, notably binocular rivalry? What properties of brain representations correspond to consciously seen percepts? A laminar cortical model of how cortical areas V1, V2, and V4 generate depthful percepts is developed to explain and quantitatively simulate binocular rivalry data. The model proposes how mechanisms of cortical development, perceptual grouping, and figure-ground perception lead to single and rivalrous percepts. Quantitative model simulations include influences of contrast changes that are synchronized with switches in the dominant eye percept, gamma distribution of dominant phase durations, piecemeal percepts, and coexistence of eye-based and stimulus-based rivalry. The model also quantitatively explains data about multiple brain regions involved in rivalry, effects of object attention on switching between superimposed transparent surfaces, and monocular rivalry. These data explanations are linked to brain mechanisms that assure non-rivalrous conscious percepts. To our knowledge, no existing model can explain all of these phenomena.
Abstract:
When we look at a scene, how do we consciously see surfaces infused with lightness and color at the correct depths? Random Dot Stereograms (RDS) probe how binocular disparity between the two eyes can generate such conscious surface percepts. Dense RDS do so despite the fact that they include multiple false binocular matches. Sparse stereograms do so even across large contrast-free regions with no binocular matches. Stereograms that define occluding and occluded surfaces lead to surface percepts wherein partially occluded textured surfaces are completed behind occluding textured surfaces at a spatial scale much larger than that of the texture elements themselves. Earlier models suggest how the brain detects binocular disparity, but not how RDS generate conscious percepts of 3D surfaces. A neural model predicts how the layered circuits of visual cortex generate these 3D surface percepts using interactions between visual boundary and surface representations that obey complementary computational rules.
Abstract:
This article introduces a new neural network architecture, called ARTMAP, that autonomously learns to classify arbitrarily many, arbitrarily ordered vectors into recognition categories based on predictive success. This supervised learning system is built up from a pair of Adaptive Resonance Theory modules (ARTa and ARTb) that are capable of self-organizing stable recognition categories in response to arbitrary sequences of input patterns. During training trials, the ARTa module receives a stream {a^(p)} of input patterns, and ARTb receives a stream {b^(p)} of input patterns, where b^(p) is the correct prediction given a^(p). These ART modules are linked by an associative learning network and an internal controller that ensures autonomous system operation in real time. During test trials, the remaining patterns a^(p) are presented without b^(p), and their predictions at ARTb are compared with b^(p). Tested on a benchmark machine learning database in both on-line and off-line simulations, the ARTMAP system learns orders of magnitude more quickly, efficiently, and accurately than alternative algorithms, and achieves 100% accuracy after training on less than half the input patterns in the database. It achieves these properties by using an internal controller that conjointly maximizes predictive generalization and minimizes predictive error by linking predictive success to category size on a trial-by-trial basis, using only local operations. This computation increases the vigilance parameter ρa of ARTa by the minimal amount needed to correct a predictive error at ARTb. Parameter ρa calibrates the minimum confidence that ARTa must have in a category, or hypothesis, activated by an input a^(p) in order for ARTa to accept that category, rather than search for a better one through an automatically controlled process of hypothesis testing.
Parameter ρa is compared with the degree of match between a^(p) and the top-down learned expectation, or prototype, that is read out subsequent to activation of an ARTa category. Search occurs if the degree of match is less than ρa. ARTMAP is hereby a type of self-organizing expert system that calibrates the selectivity of its hypotheses based upon predictive success. As a result, rare but important events can be quickly and sharply distinguished even if they are similar to frequent events with different consequences. Between input trials ρa relaxes to a baseline vigilance ρ̄a. When ρ̄a is large, the system runs in a conservative mode, wherein predictions are made only if the system is confident of the outcome. Very few false-alarm errors then occur at any stage of learning, yet the system reaches asymptote with no loss of speed. Because ARTMAP learning is self-stabilizing, it can continue learning one or more databases, without degrading its corpus of memories, until its full memory capacity is utilized.
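The vigilance test and match tracking described above can be sketched with the fuzzy-ART style match function (component-wise minimum). The vectors and the fixed increment below are illustrative assumptions, and the category search is simplified to a plain scan:

```python
import numpy as np

def category_match(x, w):
    """Fuzzy-ART style degree of match: |x ∧ w| / |x|, with ∧ the
    component-wise minimum between input x and category prototype w."""
    return np.minimum(x, w).sum() / x.sum()

def choose_category(x, weights, rho):
    """Simplified search: accept the first category whose match with
    input x meets the vigilance rho; None means a new category forms."""
    for j, w in enumerate(weights):
        if category_match(x, w) >= rho:
            return j
    return None

def match_track(x, w, eps=1e-3):
    """On a predictive error at ARTb, raise rho_a just above the match of
    the active ARTa category, forcing search for a better hypothesis."""
    return category_match(x, w) + eps

x = np.array([0.8, 0.2, 0.1])
w = np.array([0.6, 0.3, 0.1])
m = category_match(x, w)              # (0.6 + 0.2 + 0.1) / 1.1
print(round(m, 3))
```

With a baseline vigilance below m the category is accepted; after match tracking raises ρa above m, the same category fails the vigilance test and search resumes, which is the mechanism the abstract credits for sharply distinguishing rare but important events.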