39 results for "Problems of Computer Intellectualization"
Abstract:
The occipital lobe is one of the cortical areas most affected by the pathology of variant Creutzfeldt-Jakob disease (vCJD). To understand the visual problems of vCJD patients, neuropathological changes were studied in striate (B17, V1) and extrastriate (B18, V2) regions of the occipital cortex in eleven cases of vCJD. No differences in the density of vacuoles or surviving neurons were observed in B17 and B18 but densities of glial cell nuclei and deposits of the protease resistant form of prion protein (PrPsc) were greater in B18. The density of PrPsc deposits in B17 was positively correlated with their density in B18. The density of the diffuse PrPsc deposits in B17 was negatively correlated with the density of the surviving neurons in B18. In B17 and B18, the vacuoles either exhibited density peaks in laminae II/III and V/VI or were more uniformly distributed across the laminae. Diffuse PrPsc deposits were most frequent in laminae II/III and florid PrPsc deposits more generally distributed. In B18, the surviving neurons were more consistently bimodally distributed and the glial cell nuclei most abundant in laminae V/VI compared with B17. Hence, both striate and extrastriate areas of the occipital cortex are affected by the pathology of vCJD, the pathological changes being most severe in B18. Neuronal degeneration in B18 may be associated with the development of diffuse PrPsc deposits in B17. These data suggest that the short cortico-cortical connections between B17 and B18 and the pathways to subcortical visual areas are compromised in vCJD. Pathological changes in striate and extrastriate regions of the occipital cortex may contribute to several of the visual problems identified in patients with vCJD including oculomotor and visuo-spatial function.
Abstract:
This paper examines the extent to which institutions, the Low German cultural scene and individual speakers of Low German make use of modern communication technologies such as the Internet, and whether computer-mediated communication can help halt the decline of Low German. The basic approach is sociolinguistic, understanding the Internet as a space for social action in which individuals and institutions communicate. From such a perspective, the focus of interest is less the medium or the genre than the communicating individual and the speech community, in this case the virtual speech community. Based on studies that analyse the potential of computer-mediated communication (cmc) to help fight language shift in lesser-used languages, this paper discusses the situation of Low German in Northern Germany. Over the last three decades, Low German has lost more than half of its active speakers. The article raises the question of whether and, if so, how Low German speakers make use of cmc to stem this tide. Following a sociolinguistic approach focussed on the individual speakers who use the Internet as a space for social interaction, it gives an overview of the discursive field of Low German on the internet and analyses in detail the most popular Low German discussion board. It shows that one of the main obstacles to a more successful use of cmc can be found in speakers' complex attitude toward written Low German. © Franz Steiner Verlag Stuttgart.
Abstract:
Due to copyright restrictions, only available for consultation at Aston University Library and Information Services with prior arrangement.
Abstract:
Research indicates that although students are the ultimate beneficiaries of Information and Communication Technology (ICT)-based higher education learning, their voices have been neglected in its development. This paper attempts to redress this imbalance by illuminating students' perceptions of the use of Computer Assisted Learning (CAL) in an undergraduate accounting module. The findings suggest that students favour using EQL in a supportive role only. Interviewees rejected the idea of replacing human tutors with machine tutors; they believed that most of their learning occurs in tutorials and ranked these as the most important component of the module.
Abstract:
Although according to Angélil-Carter (2002: 2) 'plagiarism is a modern Western concept which arose with the introduction of copyright laws in the Eighteenth century', its avoidance is now a basic plank of respectable academic scholarship. Student plagiarism is currently a hot topic, at least for those who teach and study in British and American universities. There are companies selling both off-the-shelf and written-to-order term papers, and others, like Turnitin.com, offering an electronic detection service. Recently an Australian Rector was dismissed for persistent plagiarism earlier in his career, and most Anglo-American universities have warnings against and definitions of plagiarism on their websites. Indeed, Pennycook notes that in the mid-90s Stanford University's documents about plagiarism were reproduced by the University of Oregon apparently without attribution, and suggests, whimsically, that there is 'one set of standards for the guardians of truth and knowledge and another for those seeking entry' (1996: 213) (example and quote taken from Pecorari, 2002, p. 29).
Abstract:
The Act that established the Greater London Authority (GLA) incorporated many of New Labour's aspirations for modern governance. Among those aspirations was the notion of policy integration, or 'joining up'. The Mayor of Greater London was required to develop a number of strategies, broadly in the planning and environmental policy domains, and to ensure that those strategies meshed into a coherent overall strategy for promoting London's economic, social and environmental well-being. How would this work in practice, given the need for coordination between the GLA and a number of related functional bodies, and given the political imperative for the GLA to make an impact quickly? Through our analysis of the strategy development and integration efforts of the GLA in its first nine months, we have gleaned new insights into the highly complex and difficult process of policy integration. We argue that the high aspirations of the Act for policy integration have not been met, policy integration instead being narrowly interpreted as the coordination of strategies to the Mayor's political agenda. Finally, we reflect on the likelihood of the GLA, as currently constituted, evolving to meet the functional requirement of policy integration.
Abstract:
Technology changes rapidly over the years, continuously providing more computing options and making life easier for economic, intra-relation or any other transactions. However, the introduction of new technology "pushes" old Information and Communication Technology (ICT) products out of use. E-waste is defined as the quantity of ICT products no longer in use, and is a bivariate function of the quantities sold and the probability that a given quantity of computers will be regarded as obsolete. In this paper, an e-waste generation model is presented and applied to the following regions: Western and Eastern Europe, Asia/Pacific, Japan/Australia/New Zealand, and North and South America. Furthermore, cumulative computer sales were retrieved for selected countries of these regions in order to compute obsolete computer quantities. To provide robust forecasts, a selection of forecasting models, namely (i) Bass, (ii) Gompertz, (iii) Logistic, (iv) Trend model, (v) Level model, (vi) AutoRegressive Moving Average (ARMA), and (vii) Exponential Smoothing, was applied, selecting for each country the model with the minimum error indices (Mean Absolute Error and Mean Square Error) for the in-sample estimation. As new technology does not diffuse through all regions of the world at the same speed, owing to different socio-economic factors, the lifespan distribution, which gives the probability that a certain quantity of computers will be considered obsolete, is not adequately modeled in the literature. The time horizon for the forecasts is 2014-2030, and the results show a very sharp increase in the USA and the United Kingdom, due to decreasing computer lifespans and increasing sales.
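The abstract's core relation, e-waste as a function of past sales and a lifespan (obsolescence) distribution, can be sketched as a discrete convolution. This is a minimal sketch: the sales figures and the Weibull lifespan parameters below are illustrative assumptions, not data or parameters from the paper.

```python
import numpy as np

# Illustrative annual computer sales for one region (million units),
# 2014-2030 -- assumed values, not the paper's data.
years = np.arange(2014, 2031)
sales = np.linspace(10.0, 25.0, years.size)

# Lifespan distribution: probability that a unit becomes obsolete k years
# after sale.  A discretised Weibull is an assumed, common choice.
k = np.arange(0, 15)
shape, scale = 2.0, 5.0
cdf = 1.0 - np.exp(-(k / scale) ** shape)
lifespan_pmf = np.diff(np.append(cdf, 1.0))      # sums to 1

# E-waste in year t is past sales weighted by obsolescence probability:
#   W(t) = sum_k S(t - k) * p(k)   -- a discrete convolution.
ewaste = np.convolve(sales, lifespan_pmf)[: years.size]
```

With rising sales and a fixed lifespan distribution, the generated e-waste rises with a lag, which is the qualitative behaviour the abstract reports for the USA and the United Kingdom.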
Abstract:
Smart cameras allow video data to be pre-processed on the camera instead of being sent to a remote server for further analysis. A network of smart cameras allows various vision tasks to be processed in a distributed fashion. While cameras may have different tasks, we concentrate on distributed tracking in smart camera networks. This application introduces several highly interesting problems. Firstly, how can conflicting goals be satisfied, such as cameras in the network tracking objects while also keeping communication overhead low? Secondly, how can cameras in the network self-adapt in response to the behavior of objects and changes in scenarios, to ensure continued efficient performance? Thirdly, how can cameras organise themselves to improve the overall network's performance and efficiency? This paper presents a simulation environment, called CamSim, that allows distributed self-adaptation and self-organisation algorithms to be tested without setting up a physical smart camera network. The simulation tool is written in Java and hence is highly portable between operating systems. Relaxing various problems of computer vision and network communication enables a focus on implementing and testing new self-adaptation and self-organisation algorithms for cameras.
Abstract:
Immunoinformatics is an emergent branch of informatics science that long ago pullulated from the tree of knowledge that is bioinformatics. It is a discipline which applies informatic techniques to problems of the immune system. To a great extent, immunoinformatics is typified by epitope prediction methods. It has found disappointingly limited use in the design and discovery of new vaccines, an area where proper computational support is generally lacking. Most extant vaccines are not based around isolated epitopes; rather, they correspond to chemically-treated or attenuated whole pathogens, to individual proteins extracted from whole pathogens, or to complex carbohydrates. In this chapter we attempt to review what progress there has been in an as-yet-underexplored area of immunoinformatics: the computational discovery of whole protein antigens. The effective development of antigen prediction methods would significantly reduce the laboratory resources required to identify pathogenic proteins as candidate subunit vaccines. We begin our review by placing antigen prediction firmly into context, exploring the role of reverse vaccinology in the design and discovery of vaccines. We also highlight several competing yet ultimately complementary methodological approaches: sub-cellular location prediction, identifying antigens using sequence similarity, and the use of sophisticated statistical approaches for predicting the probability of antigen characteristics. We end by exploring how a systems immunomics approach to the prediction of immunogenicity would prove helpful in the prediction of antigens.
Abstract:
The finite element method is now well established among engineers as an extremely useful tool in the analysis of problems with complicated boundary conditions. One aim of this thesis has been to produce a set of computer algorithms capable of efficiently analysing complex three-dimensional structures. This set of algorithms has been designed for versatility: provisions such as the use of only those parts of the system which are relevant to a given analysis, and the facility to extend the system by the addition of new elements, are incorporated. Five element types have been programmed; these are prismatic members, rectangular plates, triangular plates and curved plates. The 'in and out of plane' stiffness matrices for a curved plate element are derived using the finite element technique. The performance of this type of element is compared with two other theoretical solutions as well as with a set of independent experimental observations. Additional experimental work was then carried out by the author to further evaluate the acceptability of this element. Finally, the analyses of two large civil engineering structures, the shell of an electrical precipitator and a concrete bridge, are presented to investigate the performance of the algorithms. Comparisons are made between the computer time, core store requirements and accuracy of the proposed system and those of another program.
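The element-assembly idea at the heart of such a system can be shown in its simplest form. The sketch below is not the thesis's code: it assembles two-node bar elements (a stand-in for the prismatic and plate elements described) into a global stiffness matrix and solves a clamped bar under a tip load; all material and geometry values are assumptions.

```python
import numpy as np

# Illustrative values: a clamped 1D bar of two-node elements shows the
# same scatter-and-add assembly pattern used for 3D elements.
E, A = 210e9, 1e-4                       # Young's modulus (Pa), area (m^2)
nodes = np.linspace(0.0, 1.0, 5)         # 5 nodes -> 4 bar elements
elements = [(i, i + 1) for i in range(len(nodes) - 1)]

K = np.zeros((len(nodes), len(nodes)))   # global stiffness matrix
for i, j in elements:
    L = nodes[j] - nodes[i]
    ke = (E * A / L) * np.array([[1.0, -1.0],
                                 [-1.0, 1.0]])       # element stiffness
    for a, p in enumerate((i, j)):       # scatter element terms into K
        for b, q in enumerate((i, j)):
            K[p, q] += ke[a, b]

# Clamp node 0, apply a unit axial load at the free end, solve K u = f.
f = np.zeros(len(nodes))
f[-1] = 1.0
u = np.linalg.solve(K[1:, 1:], f[1:])
tip = u[-1]                              # analytically F * L_total / (E * A)
```

Adding a new element type to such a system amounts to supplying a new `ke` routine; the assembly loop is unchanged, which is the extensibility the abstract describes.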
Abstract:
We propose and investigate an application of the method of fundamental solutions (MFS) to the radially symmetric and axisymmetric backward heat conduction problem (BHCP) in a solid or hollow cylinder. In the BHCP, the initial temperature is to be determined from the temperature measurements at a later time. This is an inverse and ill-posed problem, and we employ and generalize the MFS regularization approach [B.T. Johansson and D. Lesnic, A method of fundamental solutions for transient heat conduction, Eng. Anal. Boundary Elements 32 (2008), pp. 697–703] for the time-dependent heat equation to obtain a stable and accurate numerical approximation with small computational cost.
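The ill-posedness described here, and the stabilising effect of regularisation, can be seen in a stripped-down analogue. The sketch below is not the authors' MFS: it uses a sine-mode forward map for the 1D heat equation, where naive backward solution amplifies measurement noise exponentially, and applies Tikhonov regularisation instead. All numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D heat equation on [0, 1] with zero boundary values, in a sine basis:
# mode n is damped by a_n = exp(-(n*pi)^2 * T) between t = 0 and t = T.
n = np.arange(1, 21)
T = 0.05
a = np.exp(-(n * np.pi) ** 2 * T)

# Assumed "true" initial-temperature coefficients (the unknown of the BHCP).
c0 = np.where(n <= 3, 1.0 / n, 0.0)

# Measured coefficients at the later time T, with small additive noise.
cT = a * c0 + 1e-4 * rng.standard_normal(n.size)

# Naive backward solution divides by a_n and amplifies the noise by
# exp(+(n*pi)^2 * T) -- the ill-posedness of the backward problem.
naive = cT / a

# Tikhonov regularisation: minimise |a*c - cT|^2 + lam*|c|^2 per mode.
lam = 1e-6
tikhonov = a * cT / (a ** 2 + lam)
```

The regularised solution stays close to the true coefficients while the naive one is swamped by amplified noise, which is why a regularised approach such as the MFS scheme cited above is needed.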
Abstract:
This work reports the development of a mathematical model and a distributed, multivariable computer control system for a pilot-plant double-effect climbing-film evaporator. A distributed-parameter model of the plant has been developed and the time-domain model transformed into the Laplace domain. The model has been further transformed into an integral domain conforming to an algebraic ring of polynomials, to eliminate the transcendental terms which arise in the Laplace domain due to the distributed nature of the plant model. This has made possible the application of linear control theories to a set of linear partial differential equations. The models obtained tracked the experimental results of the plant well. A distributed computer network has been interfaced with the plant to implement digital controllers in a hierarchical structure. A modern multivariable Wiener-Hopf controller has been applied to the plant model. The application revealed a limiting condition: the plant matrix must be positive-definite along the infinite frequency axis. A new multivariable control theory has emerged from this study which avoids the above limitation. The controller has the structure of the modern Wiener-Hopf controller, but with a unique feature enabling the designer to specify the closed-loop poles in advance and to shape the sensitivity matrix as required. In this way, the method directly treats the interaction problems found in chemical processes, with good tracking and regulation performance. The ability of such analytical design methods to determine once and for all whether a given set of specifications can be met is one of their chief advantages over conventional trial-and-error design procedures. However, one disadvantage that offsets these enormous advantages to some degree is the relatively complicated algebra that must be employed in working out all but the simplest problems.
Mathematical algorithms and computer software have been developed to treat some of the mathematical operations defined over the integral domain, such as matrix fraction description, spectral factorization, the Bezout identity, and the general manipulation of polynomial matrices. Hence, the design problems of Wiener-Hopf type of controllers and other similar algebraic design methods can be easily solved.
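One of the operations listed, the Bezout identity, reduces in the scalar case to the extended Euclidean algorithm on polynomials. The sketch below is a minimal, non-matrix version (the thesis works with polynomial matrices) using NumPy coefficient arrays, lowest degree first; the example polynomials are hypothetical.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def poly_bezout(a, b, tol=1e-12):
    """Extended Euclid on polynomial coefficient arrays (lowest degree
    first): returns (g, x, y) with a*x + b*y = g, g a gcd of a and b."""
    r0 = np.atleast_1d(np.asarray(a, dtype=float))
    r1 = np.atleast_1d(np.asarray(b, dtype=float))
    x0, x1 = np.array([1.0]), np.array([0.0])
    y0, y1 = np.array([0.0]), np.array([1.0])
    while np.max(np.abs(r1)) > tol:
        q, rem = P.polydiv(r0, r1)        # r0 = q*r1 + rem
        r0, r1 = r1, rem
        # Maintain the invariant r_i = a*x_i + b*y_i.
        x0, x1 = x1, P.polysub(x0, P.polymul(q, x1))
        y0, y1 = y1, P.polysub(y0, P.polymul(q, y1))
    return r0, x0, y0

# Hypothetical example: (s+1)(s+2) and (s+1)(s+3); the returned gcd is
# proportional to (s+1), and a*x + b*y reproduces it.
g, x, y = poly_bezout([2.0, 3.0, 1.0], [3.0, 4.0, 1.0])
```

The matrix-fraction and spectral-factorisation routines mentioned in the abstract build on exactly this kind of polynomial manipulation, extended to polynomial matrices.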
Abstract:
Hard real-time systems are a class of computer control systems that must react to demands of their environment by providing `correct' and timely responses. Since these systems are increasingly being used in systems with safety implications, it is crucial that they are designed and developed to operate in a correct manner. This thesis is concerned with developing formal techniques that allow the specification, verification and design of hard real-time systems. Formal techniques for hard real-time systems must be capable of capturing the system's functional and performance requirements, and previous work has proposed a number of techniques which range from the mathematically intensive to those with some mathematical content. This thesis develops formal techniques that contain both an informal and a formal component because it is considered that the informality provides ease of understanding and the formality allows precise specification and verification. Specifically, the combination of Petri nets and temporal logic is considered for the specification and verification of hard real-time systems. Approaches that combine Petri nets and temporal logic by allowing a consistent translation between each formalism are examined. Previously, such techniques have been applied to the formal analysis of concurrent systems. This thesis adapts these techniques for use in the modelling, design and formal analysis of hard real-time systems. The techniques are applied to the problem of specifying a controller for a high-speed manufacturing system. It is shown that they can be used to prove liveness and safety properties, including qualitative aspects of system performance. The problem of verifying quantitative real-time properties is addressed by developing a further technique which combines the formalisms of timed Petri nets and real-time temporal logic. A unifying feature of these techniques is the common temporal description of the Petri net. 
A common problem with Petri net based techniques is the complexity associated with generating the reachability graph. This thesis addresses this problem by using concurrency sets to generate a partial reachability graph pertaining to a particular state. These sets also allow each state to be checked for the presence of inconsistencies and hazards. The problem of designing a controller for the high-speed manufacturing system is also considered. The approach adopted involves the use of a model-based controller: this type of controller uses the Petri net models developed, thus preserving the properties already proven of the controller. It also contains a model of the physical system which is synchronised to the real application to provide timely responses. The various ways of forming the synchronisation between these processes are considered and the resulting nets are analysed using concurrency sets.
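The reachability-graph construction whose cost the concurrency sets are meant to tame can be sketched for a toy net. The three-transition net below is a hypothetical machine cycle, not the thesis's manufacturing controller; the breadth-first search enumerates markings up to a state cap, which is where the state-explosion problem bites for realistic nets.

```python
from collections import deque

def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume input tokens, produce output tokens."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

def reachability(initial, transitions, limit=1000):
    """Breadth-first construction of the reachability graph, capped at
    `limit` distinct markings."""
    key = lambda m: tuple(sorted((p, n) for p, n in m.items() if n))
    seen = {key(initial): initial}
    edges = []
    queue = deque([initial])
    while queue and len(seen) < limit:
        m = queue.popleft()
        for t, (pre, post) in transitions.items():
            if enabled(m, pre):
                m2 = fire(m, pre, post)
                edges.append((key(m), t, key(m2)))
                if key(m2) not in seen:
                    seen[key(m2)] = m2
                    queue.append(m2)
    return seen, edges

# Hypothetical three-transition machine cycle (not the thesis's net):
transitions = {
    "load":   ({"idle": 1}, {"busy": 1}),
    "work":   ({"busy": 1}, {"done": 1}),
    "unload": ({"done": 1}, {"idle": 1}),
}
states, edges = reachability({"idle": 1}, transitions)
```

For this cycle the full graph has only three markings; a partial graph restricted to the neighbourhood of one state, as the concurrency-set approach does, keeps the enumeration tractable for nets where the full graph is far larger.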