48 results for Many-Valued Intellectual System
Abstract:
This text is concerned with the intellectual and social alienation experienced by a twentieth-century German writer (1906 - ). The alienation begins in the context of German society, but this context is later globalised. The thesis first discusses the social and intellectual origins and the salient features of this alienated stance, before proceeding to a detailed analysis of its recurring symptoms and later intensification in each of the author's main works, chronologically surveyed and supported by reference to minor writings. From the novels of the thirties, showing the burgher-artist conflict and its symbolic dichotomies, the renunciation of traditional German values, and the ambiguous confrontation with new disruptive socio-political forces, we move to the post-war trilogy (1951-54), with its roots in the German social and political experience of the thirties onwards. The latter, however, is merely a background for the presentation of a much more comprehensive view of the human condition: a pessimistic vision of the repetitiveness and incorrigibility of this condition, the possibility of the apocalypse, the bankruptcy and ineffectiveness of European religion and culture, the 'absurd' meaninglessness of history, the intellectual artist's position and role(s) in mass-culture and an abstract, technologised mass-society, and the central theme of fragmentation (of the structure of reality, society and personality) and the artist's relation to this fragmentation, intensified in the twentieth century. Style and language are consonant with this world-picture. Many of these features recur in the travel-books (1958-61); diachronic as well as synchronic approaches characterise the presentation of various modes of contemporary society in America, Russia, France and other European countries. Important features of intellectual alienation are: the changelessness of historical motifs (e.g. 
tyranny, aggression), the conventions of burgher society in both old and new forms, the qualitative depreciation and standardisation of living, industrialisation and technology in complex, vulnerable and concentrated urban societies, and the ambiguities of fragmented pluralism. Reference is made to other travel-writers.
Abstract:
The thesis offers a comparative interdisciplinary approach to the examination of the intellectual debates about the relationship between individual and society in the GDR under Honecker. It shows that there was not only a continuum of debate between the academic disciplines, but also from the radical critics of the GDR leadership such as Robert Havemann, Rudolf Bahro and Stefan Heym through the social scientists, literary critics and legal theorists working in the academic institutions to theorists close to the GDR leadership. It also shows that the official line and policy of the ruling party itself on the question of the individual and society was not static over the period, but changed in response to internal and external pressures. Over the period 1971 - 1989 greater emphasis was placed by many intellectuals on the individual, his needs and interests. It was increasingly recognised that conflicts could exist between the individual and society in GDR socialism. Whereas the radical critics argued that these conflicts were due to features of GDR society, such as the hierarchical system of labour functions and bureaucracy, and extrapolated from this a general conflict between the political leadership and population, orthodox critics argued that conflicts existed between a specific individual and society and were largely due to external and historical factors. The internal critics also pointed to the social phenomena which were detrimental to the individual's development in the GDR, but they put forward less radical solutions. With the exception of a few radical young writers, all theorists studied in this thesis gave precedence to social interests over individual interests and so did not advocate a return to `individualistic' positions. 
The continuity of sometimes quite controversial discussions in the GDR academic journals and the flexibility of the official line and policy suggest that it is inappropriate to refer to GDR society under Honecker simply as totalitarian, although it did have some totalitarian features. What the thesis demonstrates is the existence of `Teilöffentlichkeiten' in which critical discussion was conducted even as the official, orthodox line was given out for public consumption in the high-circulation media.
Abstract:
This thesis deals with the problem of Information Systems design for Corporate Management. It shows that the results of applying current approaches to Management Information Systems and Corporate Modelling fully justify a fresh look at the problem. The thesis develops an approach to design based on Cybernetic principles and theories. It looks at Management as an informational process and discusses the relevance of regulation theory to its practice. The work proceeds around the concept of change and its effects on the organization's stability and survival. The idea of looking at organizations as viable systems is discussed and a design to enhance survival capacity is developed. Taking Ashby's theory of adaptation and developments on ultra-stability as a theoretical framework, and considering the conditions for learning and foresight, it deduces that a design should include three basic components: a dynamic model of the organization-environment relationships; a method to spot significant changes in the value of the essential variables and in a certain set of parameters; and a Controller able to conceive and change the other two elements and to make choices among alternative policies. Further consideration of the conditions for rapid adaptation in organisms composed of many parts, and of the law of Requisite Variety, determines that successful adaptive behaviour requires a certain functional organization. Beer's model of viable organizations is put in relation to Ashby's theory of adaptation and regulation. The use of the ultra-stable system as an abstract unit of analysis permits the development of a rigorous taxonomy of change: it starts by distinguishing between change within behaviour and change of behaviour, completing the classification with organizational change. It relates these changes to the logical categories of learning, connecting the topic of Information System design with that of organizational learning.
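Ashby's ultra-stability idea referred to above can be illustrated with a toy simulation: an essential variable is kept within a viability band by feedback, and whenever it escapes the band the feedback parameter is re-drawn at random until stability is restored. This is a hypothetical sketch; the dynamics, band limits and function name are ours, not the thesis's design.

```python
import random

def ultrastable_step(state, param, disturbance, low=-1.0, high=1.0, rng=random):
    """One iteration of a toy ultra-stable loop: the essential variable
    'state' is driven by a disturbance and damped by negative feedback;
    if it leaves the viability band [low, high], the feedback parameter
    is re-drawn at random (Ashby's step-function change) and the state
    is clamped back into the band."""
    state = state + disturbance - param * state  # simple negative feedback
    if not (low <= state <= high):
        param = rng.uniform(0.0, 2.0)            # random re-parameterisation
        state = max(min(state, high), low)       # clamp into the band
    return state, param
```

Iterating this step under random disturbances shows the characteristic behaviour: occasional parameter "resets" until a configuration is found that keeps the essential variable within bounds.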
Abstract:
This research aimed to provide a comparative analysis of South Asian and White British students in their academic attainment at school and university and in their search for employment. Data were gathered using a variety of methodological techniques. Completed postal questionnaires were received from 301 South Asian and White British undergraduates from 12 British universities who were in their final year of study in 1985. In-depth interviews were also conducted with 49 graduates, a self-selected group from the original sample. Additional information was collected by using diary report forms and by administering a second postal questionnaire to selected South Asian and White British participants. It was found that while the pre-university qualifications of the White British and South Asian undergraduates did not differ considerably, many members of the latter group had travelled a more arduous path to academic success. For some South Asians, school experiences included the confrontation of racist attitudes and behaviour from both teachers and peers. The South Asian respondents in this study were more likely than their White British counterparts to have attempted some C.S.E. examinations, obtained some of their `O' levels in the Sixth Form and retaken their `A' levels. As a result, the South Asians were on average older than their White British peers when entering university. A small sample of South Asians also found that the effects of racism were perpetuated in higher education, where they faced difficulty both academically and socially. Overall, however, since going to university most South Asians felt further drawn towards their `cultural background', this often being their own unique view of `Asianess'. Regarding their plans after graduation, it was found that South Asians were more likely to opt for further study, believing that they needed to be better qualified than their White British counterparts. 
For those South Asians who were searching for work, it was noted that they were better qualified, willing to accept a lower minimum salary, had made more job applications and had started searching for work earlier than the comparable White British participants. Also, although they generally had no difficulty in obtaining interviews, South Asian applicants were less likely to receive an offer of employment. In the final analysis of their future plans, it was found that a large proportion of South Asian graduates were aspiring towards self-employment.
Abstract:
This thesis presents the results of an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG encompasses both the methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta and beta bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear, despite a variety of reasons to suspect that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract cognitive brain state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. 
One of the main objectives of this thesis is to show that a much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratio obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are necessarily extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high costs and low portability of state-of-the-art multichannel machines. The result is that the use of MEG has hitherto been restricted to large institutions able to afford the high costs associated with the procurement and maintenance of these machines. In this project, we seek to address these issues by working almost exclusively with single-channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks to the analysis of MEG data. 
It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
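The dynamical-systems starting point described above - treating a single-channel recording as an observation of an unobservable underlying state - is typically operationalised by time-delay embedding (Takens' theorem). A minimal sketch, assuming NumPy; the function name and parameter choices are illustrative, not the thesis's implementation:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct a state-space trajectory from a single-channel time
    series via time-delay embedding: each row of the result is the
    delay vector (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau  # number of complete delay vectors
    if n <= 0:
        raise ValueError("series too short for this dim/tau")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
```

Nonlinear statistics (correlation dimension, Lyapunov exponents, nonlinear prediction error) are then computed on the embedded trajectory rather than on the raw series. Suitable `dim` and `tau` are usually chosen with the false-nearest-neighbours and mutual-information heuristics.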
Abstract:
Classification of metamorphic rocks is normally carried out using a poorly defined, subjective classification scheme making this an area in which many undergraduate geologists experience difficulties. An expert system to assist in such classification is presented which is capable of classifying rocks and also giving further details about a particular rock type. A mixed knowledge representation is used with frame, semantic and production rule systems available. Classification in the domain requires that different facets of a rock be classified. To implement this, rocks are represented by 'context' frames with slots representing each facet. Slots are satisfied by calling a pre-defined ruleset to carry out the necessary inference. The inference is handled by an interpreter which uses a dependency graph representation for the propagation of evidence. Uncertainty is handled by the system using a combination of the MYCIN certainty factor system and the Dempster-Shafer range mechanism. This allows for positive and negative reasoning, with rules capable of representing necessity and sufficiency of evidence, whilst also allowing the implementation of an alpha-beta pruning algorithm to guide question selection during inference. The system also utilizes a semantic net type structure to allow the expert to encode simple relationships between terms enabling rules to be written with a sensible level of abstraction. Using frames to represent rock types where subclassification is possible allows the knowledge base to be built in a modular fashion with subclassification frames only defined once the higher level of classification is functioning. Rulesets can similarly be added in modular fashion with the individual rules being essentially declarative allowing for simple updating and maintenance. 
The knowledge base so far developed for metamorphic classification serves to demonstrate the performance of the interpreter design whilst also moving some way towards providing a useful assistant to the non-expert metamorphic petrologist. The system demonstrates the possibilities for a fully developed knowledge base to handle the classification of igneous, sedimentary and metamorphic rocks. The current knowledge base and interpreter have been evaluated by potential users and experts. The results of the evaluation show that the system performs to an acceptable level and should be of use as a tool for both undergraduates and researchers from outside the metamorphic petrography field.
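The MYCIN certainty-factor system mentioned above combines evidence incrementally with a standard formula. The sketch below shows that standard combination rule only - not the thesis's actual hybrid with Dempster-Shafer ranges or its alpha-beta question-selection pruning:

```python
def combine_cf(cf1, cf2):
    """Combine two MYCIN-style certainty factors, each in [-1, 1]
    (positive = evidence for, negative = evidence against),
    using the classic MYCIN combination formula."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)            # both confirming
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)            # both disconfirming
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))  # mixed evidence
```

The formula is commutative and keeps results in [-1, 1], which is what lets rules contribute positive and negative evidence independently, as the classification system requires.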
Abstract:
In recent years, freshwater fish farmers have come under increasing pressure from the Water Authorities to control the quality of their farm effluents. This project aimed to investigate methods of treating aquacultural effluent in an efficient and cost-effective manner, and to incorporate the knowledge gained into an Expert System which could then be used in an advice service to farmers. From the results of this research it was established that sedimentation and the use of low pollution diets are the only cost effective methods of controlling the quality of fish farm effluents. Settlement has been extensively investigated and it was found that the removal of suspended solids in a settlement pond is only likely to be effective if the inlet solids concentration is in excess of 8 mg/litre. The probability of good settlement can be enhanced by keeping the ratio of length/retention time (a form of mean fluid velocity) below 4.0 metres/minute. The removal of BOD requires inlet solids concentrations in excess of 20 mg/litre to be effective, and this is seldom attained on commercial fish farms. Settlement, generally, does not remove appreciable quantities of ammonia from effluents, but algae can absorb ammonia by nutrient uptake under certain conditions. The use of low pollution, high performance diets gives pollutant yields which are low when compared with published figures obtained by many previous workers. Two Expert Systems were constructed, both of which diagnose possible causes of poor effluent quality on fish farms and suggest solutions. The first system uses knowledge gained from a literature review and the second employs the knowledge obtained from this project's experimental work. Consent details for over 100 fish farms were obtained from the public registers kept by the Water Authorities. Large variations in policy from one Authority to the next were found. These data have been compiled in a computer file for ease of comparison.
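The settlement thresholds reported above lend themselves to a simple rule check of the kind such an Expert System might perform. This is a hypothetical illustration - the function name and structure are ours, with only the thresholds (8 mg/l solids, 4.0 m/min velocity ratio, 20 mg/l for BOD) taken from the abstract:

```python
def effluent_advice(inlet_solids_mg_l, pond_length_m, retention_min):
    """Diagnose likely settlement-pond problems from the abstract's
    reported thresholds; returns a list of advisory strings."""
    advice = []
    if inlet_solids_mg_l < 8.0:
        advice.append("inlet solids below 8 mg/l: settlement unlikely to be effective")
    velocity = pond_length_m / retention_min  # length / retention time, m/min
    if velocity >= 4.0:
        advice.append("mean fluid velocity at or above 4.0 m/min: reduce flow or enlarge pond")
    if inlet_solids_mg_l < 20.0:
        advice.append("inlet solids below 20 mg/l: BOD removal by settlement unlikely")
    return advice
```

An empty return list corresponds to conditions under which the abstract suggests settlement should work; the real systems were built from both the literature review and the project's experimental data.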
Abstract:
In many areas of northern India, salinity renders groundwater unsuitable for drinking and even for irrigation. Though membrane treatment can be used to remove the salt, there are some drawbacks to this approach e.g. (1) depletion of the groundwater due to over-abstraction, (2) saline contamination of surface water and soil caused by concentrate disposal and (3) high electricity usage. To address these issues, a system is proposed in which a photovoltaic-powered reverse osmosis (RO) system is used to irrigate a greenhouse (GH) in a stand-alone arrangement. The concentrate from the RO is supplied to an evaporative cooling system, thus reducing the volume of the concentrate so that finally it can be evaporated in a pond to solid for safe disposal. Based on typical meteorological data for Delhi, calculations based on mass and energy balance are presented to assess the sizing and cost of the system. It is shown that solar radiation, freshwater output and evapotranspiration demand are readily matched due to the approximately linear relation among these variables. The demand for concentrate varies independently, however, thus favouring the use of a variable recovery arrangement. Though enough water may be harvested from the GH roof to provide year-round irrigation, this would require considerable storage. Some practical options for storage tanks are discussed. An alternative use of rainwater is in misting to reduce peak temperatures in the summer. An example optimised design provides internal temperatures below 30 °C (monthly average daily maxima) for 8 months of the year and costs about €36,000 for the whole system with a GH floor area of 1000 m². Further work is needed to assess technical risks relating to scale-deposition in the membrane and evaporative pads, and to develop a business model that will allow such a project to succeed in the Indian rural context.
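The variable-recovery mass balance behind the sizing argument above can be sketched in a few lines. This is illustrative only - the function name and units are ours, and the actual study also couples an energy balance with meteorological data:

```python
def size_ro(irrigation_demand_m3, recovery):
    """RO mass balance: given the permeate (irrigation) demand and the
    recovery ratio (permeate/feed), return the required feed volume and
    the resulting concentrate volume (sent to evaporative cooling)."""
    if not 0.0 < recovery < 1.0:
        raise ValueError("recovery must be a fraction strictly between 0 and 1")
    feed = irrigation_demand_m3 / recovery
    concentrate = feed - irrigation_demand_m3  # what the cooling pads must absorb
    return feed, concentrate
```

Lowering the recovery at fixed irrigation demand raises the concentrate flow, which is why a variable-recovery arrangement lets the system track the independently varying cooling demand.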
Abstract:
Manufacturing systems that are heavily dependent upon direct workers have an inherent complexity that the system designer is often ill-equipped to understand. This complexity is due to the interactions that cause variations in the performance of the workers. Variation in human performance can be explained by many factors; however, one important factor that is not currently considered in any detail during the design stage is the physical working environment. This paper presents the findings of ongoing research investigating human performance within manufacturing systems. It sets out to identify the form of the relationships that exist between changes in physical working environmental variables and operator performance. These relationships can provide managers with a decision basis when designing and managing manufacturing systems and their environments.
Abstract:
Once, the factory worker was considered to be a necessary evil, soon to be replaced by robotics and automation. Today, many manufacturers appreciate that people in direct productive roles can provide important flexibility and responsiveness, and so significantly contribute to business success. The challenge is no longer to design people out of the factory, but to design factory environments that help to get the best performance from people. This paper describes research that has set out to help to achieve this by expanding the capabilities of simulation modeling tools currently used by practitioners.
Abstract:
Recent National Student Surveys revealed that many U.K. university students are dissatisfied with the timeliness and usefulness of the feedback received from their tutors. Ensuring timeliness in marking often results in a reduction in the quality of feedback. In Computer Science, where learning relies on practising and learning from mistakes, feedback that pin-points errors and explains means of improvement is important for a good student learning experience. Though suitable use of Information and Communication Technology should alleviate this problem, existing Virtual Learning Environments and e-Assessment applications such as Blackboard/WebCT, BOSS, MarkTool and GradeMark are inadequate to support a coursework assessment process that promotes timeliness and usefulness of feedback while maintaining consistency in marking involving multiple tutors. We have developed a novel Internet application, called eCAF, for facilitating an efficient and transparent coursework assessment and feedback process. The eCAF system supports detailed marking scheme editing and enables tutors to use such schemes to pin-point errors in students' work so as to provide helpful feedback efficiently. Tutors can also highlight areas in a submitted work and attach feedback that clearly links to the identified mistakes and the respective marking criteria. In light of the results obtained from a recent trial of eCAF, we discuss how the key features of eCAF may facilitate an effective and efficient coursework assessment and feedback process.
Abstract:
Spare parts warehousing decision-making plays an important role in today's manufacturing industry as it derives an optimum inventory policy for the organizations. Previous research on spare parts warehousing decision-making did not deal with the problem holistically, considering all the subjective and objective criteria of the operational and strategic needs of manufacturing companies in the process industry. This study reviews current relevant literature and develops a conceptual framework (an integrated group decision support system) for selecting the most effective warehousing option for the process industry using the analytic hierarchy process (AHP). The framework has been applied to a multinational cement manufacturing company in the UK. Three site visits, eight formal interviews, and several discussions have been undertaken with personnel of the organization, many of whom have more than 20 years of experience, in order to apply the proposed decision support system (DSS). Subsequently, the DSS has been validated through a questionnaire survey in order to establish its usefulness, its effectiveness for warehousing decision-making, and the possibility of adoption. The proposed DSS is an integrated framework for selecting the best warehousing option for business excellence in any manufacturing organization.
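At the core of AHP is the derivation of priority weights from pairwise-comparison matrices. A minimal sketch using the row geometric-mean approximation to the principal eigenvector (the study's integrated group DSS is of course far richer; the function name is ours):

```python
import math

def ahp_priorities(matrix):
    """Approximate AHP priority weights from a square pairwise-comparison
    matrix (entry [i][j] = how strongly criterion i is preferred to j)
    via the row geometric-mean method, normalised to sum to 1."""
    n = len(matrix)
    geo_means = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]
```

For a perfectly consistent matrix the geometric-mean weights coincide with the principal eigenvector; in practice a consistency-ratio check on the judgements is also applied before the weights are used to rank warehousing options.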
Abstract:
Self-adaptation is emerging as an increasingly important capability for many applications, particularly those deployed in dynamically changing environments, such as ecosystem monitoring and disaster management. One key challenge posed by Dynamically Adaptive Systems (DASs) is the need to handle changes to the requirements and corresponding behavior of a DAS in response to varying environmental conditions. Berry et al. previously identified four levels of requirements engineering (RE) that should be performed for a DAS. In this paper, we propose the Levels of RE for Modeling, which reify the original levels to describe the RE modeling work done by DAS developers. Specifically, we identify four types of developers: the system developer, the adaptation scenario developer, the adaptation infrastructure developer, and the DAS research community. Each level corresponds to the work of a different type of developer to construct goal model(s) specifying their requirements. We then leverage the Levels of RE for Modeling to propose two complementary processes for performing RE for a DAS. We describe our experiences with applying this approach to GridStix, an adaptive flood warning system deployed to monitor the River Ribble in Yorkshire, England.
Abstract:
Many passengers experience discomfort during flight because of the effect of low humidity on the skin, eyes, throat, and nose. In this physiological study, we have investigated whether flight and low humidity also affect the tympanic membrane. From previous studies, a decrease in admittance of the tympanic membrane through drying might be expected to affect the buffering capacity of the middle ear and to disrupt automatic pressure regulation. This investigation involved an observational study onboard an aircraft combined with experiments in an environmental chamber, where the humidity could be controlled but could not be made as low as during flight. For the flight study, there was a linear relationship between the peak compensated static admittance of the tympanic membrane and relative humidity, with a constant of proportionality of 0.00315 mmho/% relative humidity. The low humidity at cruise altitude (minimum 22.7 %) was associated with a mean decrease in admittance of about 20 % compared with measurements in the airport. From the chamber study, we further found that a mean decrease in relative humidity of 23.4 % led to a significant decrease in mean admittance by 0.11 mmho [F(1,8) = 18.95, P = 0.002], a decrease of 9.4 %. The order of magnitude of the effect of humidity was similar for the flight and environmental chamber studies. We conclude that the admittance changes during flight were likely to have been caused by the low humidity in the aircraft cabin, and that these changes may affect the automatic pressure regulation of the middle ear during descent. © 2013 Association for Research in Otolaryngology.
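The reported linear relation can be applied directly to estimate admittance changes from humidity changes. A one-line sketch using the constant of proportionality quoted above (the function name and default are ours; only the slope comes from the flight study):

```python
def admittance_change(delta_rh_percent, slope=0.00315):
    """Predicted change in peak compensated static admittance (mmho)
    for a given change in relative humidity (%), using the flight
    study's constant of proportionality (0.00315 mmho per % RH)."""
    return slope * delta_rh_percent
```

For the chamber study's 23.4 % humidity drop this predicts a decrease of roughly 0.07 mmho, the same order of magnitude as the 0.11 mmho actually measured, consistent with the abstract's conclusion.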
Abstract:
Radio-frequency identification (RFID) technology is a popular modern technology proven to deliver a range of value-added benefits that achieve system and operational efficiency as well as cost-effectiveness. The operational characteristics of RFID outperform barcodes in many respects. Despite its well-perceived benefits, the case for larger-scale adoption is still not promising. One of the key reasons is high implementation cost, especially the cost of tags for applications involving item-level tagging. This has resulted in the development of chipless RFID tags, which cost much less than conventional chip-based tags. Despite the much lower tag cost, the uptake of chipless RFID systems in the market is still not as widespread as predicted by RFID experts. This chapter explores value-added applications of chipless RFID systems to promote wider adoption. The chipless technology's technical and operational characteristics, benefits, limitations and current uses are also examined. The merit of this chapter is to contribute fresh propositions for promising applications of chipless RFID, to increase its adoption in industries that currently make little or no use of it, such as retail, logistics, manufacturing, healthcare, and the service sector. © 2013, IGI Global.