Abstract:
Individuals who have sustained a traumatic brain injury (TBI) often complain of trouble sleeping and daytime fatigue, but little is known about the neurophysiological underpinnings of these sleep difficulties. The fragile sleep of those with a TBI was predicted to be characterized by impairments in gating, hyperarousal, and a breakdown in sleep homeostatic mechanisms. To test these hypotheses, 20 individuals with a TBI (18-64 years old, 10 men) and 20 age-matched controls (18-61 years old, 9 men) took part in a comprehensive investigation of their sleep. While TBI participants were not recruited based on sleep complaint, the final sample comprised individuals with a variety of sleep complaints, across a range of injury severities. Rigorous screening procedures were used to reduce potential confounds (e.g., medication). Sleep and waking data were recorded with a 20-channel montage on three consecutive nights. Results showed dysregulation in sleep/wake mechanisms. The sleep of individuals with a TBI was less efficient than that of controls, as measured by sleep architecture variables. There was a clear breakdown in both spontaneous and evoked K-complexes in those with a TBI. Greater injury severities were associated with reductions in spindle density, though sleep spindles in slow wave sleep were longer for individuals with TBI than controls. Quantitative EEG revealed an impairment in sleep homeostatic mechanisms during sleep in the TBI group. As well, results showed the presence of hyperarousal based on quantitative EEG during sleep. In wakefulness, quantitative EEG showed a clear dissociation in arousal level between TBI participants with complaints of insomnia and those with daytime fatigue. In addition, ERPs indicated that the experience of hyperarousal in persons with a TBI was supported by neural evidence, particularly in wakefulness and Stage 2 sleep, and especially for those with insomnia symptoms.
ERPs during sleep suggested that individuals with a TBI experienced impairments in information processing and sensory gating. Whereas neuropsychological testing and subjective data confirmed predicted deficits in the waking function of those with a TBI, particularly for those with more severe injuries, there were few group differences on laboratory computer-based tasks. Finally, the use of correlation analyses confirmed distinct sleep-wake relationships for each group. In sum, the mechanisms contributing to sleep disruption in TBI are particular to this condition, and unique neurobiological mechanisms predict the experience of insomnia versus daytime fatigue following a TBI. An understanding of how sleep becomes disrupted after a TBI is important to directing future research and neurorehabilitation.
Abstract:
With advances in information technology, economic and financial time-series data are increasingly available. However, if standard time-series techniques are applied, this wealth of information comes with a dimensionality problem. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis. This technique has grown increasingly popular in economics since the 1990s. Given the availability of data and computational advances, several new questions arise. What are the effects and the transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help to better identify monetary policy shocks, in view of the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor method be improved by incorporating another dimension-reduction technique such as VARMA analysis? Does this produce better forecasts of the major macroeconomic aggregates and help with impulse response analysis? Finally, can factor analysis be applied to random parameters? For instance, is there only a small number of sources of the temporal instability of coefficients in empirical macroeconomic models? Using structural factor analysis and VARMA modelling, my thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA.
This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the final chapter is to impose a factor structure on time-varying parameters and to show that there is a small number of sources of this instability. The first article analyses the transmission of monetary policy in Canada using a factor-augmented vector autoregressive (FAVAR) model. Previous studies based on VAR models found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for properly identifying the transmission of monetary policy, and that it helps correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse response functions for every indicator in the dataset, producing the most comprehensive analysis to date of the effects of monetary policy in Canada. Motivated by the recent economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and propagation of credit shocks on the real economy, using a large set of economic and financial indicators within a structural factor model. We find that a credit shock immediately raises credit spreads, lowers the value of Treasury bills, and causes a recession. These shocks have a sizeable effect on measures of real activity, price indices, leading indicators, and financial indicators. Unlike other studies, our identification procedure for the structural shock does not require timing restrictions between financial and macroeconomic factors.
Moreover, it gives an interpretation of the factors without constraining their estimation. In the third article we study the relationship between VARMA and factor representations of vector stochastic processes, and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, multivariate series and their associated factors cannot simultaneously follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, combining these two parameter-reduction methods. The model is applied in two forecasting exercises using the US and Canadian data of Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that the VARMA component helps forecast the major macroeconomic aggregates better than standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors yields consistent and precise results on the effects and transmission of monetary policy in the United States. Whereas the FAVAR model used in the earlier study required estimating 510 VAR coefficients, we produce similar results with only 84 parameters for the dynamic factor process. The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment, using a structural FAVARMA model.
Within the financial accelerator framework developed by Bernanke, Gertler and Gilchrist (1999), we approximate the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of the Canadian business cycle. Variance decomposition analysis reveals that this credit shock has a sizeable effect on various sectors of real activity, price indices, leading indicators, and credit spreads. On the other hand, an unexpected increase in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, proxied here by the US market. Finally, given the identification procedure for the structural shocks, we obtain economically interpretable factors. The behaviour of economic agents and of the economic environment can vary over time (e.g., changes in monetary policy strategy, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of temporal variability in the coefficients is likely very small, and we provide the first known empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach, proposed in Stevanovic (2010), is applied within a standard VAR model with random coefficients (TVP-VAR).
We find that a single factor explains most of the variability of the VAR coefficients, while the shock volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out with data including the recent financial crisis. The procedure now suggests two factors, and the behaviour of the coefficients shows an important shift since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only 5 dynamic factors govern the temporal instability in almost 700 coefficients.
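The core FAVAR pipeline this abstract describes — extracting a few latent factors from a large panel of series by principal components and then fitting a small VAR on those factors — can be sketched as follows. This is a minimal illustration on simulated data, not the thesis's estimation code; all variable names and dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy panel: N = 50 observed series over T = 200 periods, driven by k = 2
# persistent latent factors plus idiosyncratic noise.
T, N, k = 200, 50, 2
F = 0.1 * rng.standard_normal((T, k)).cumsum(axis=0)   # latent factors
Lam = rng.standard_normal((N, k))                      # factor loadings
X = F @ Lam.T + 0.5 * rng.standard_normal((T, N))      # observed panel

# Step 1: extract factors by principal components (standardise, then SVD).
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
Fhat = U[:, :k] * s[:k]          # estimated factors (up to rotation/sign)

# Step 2: fit a VAR(1) on the estimated factors by least squares.
Y, Z = Fhat[1:], Fhat[:-1]
A = np.linalg.lstsq(Z, Y, rcond=None)[0].T   # k x k transition matrix
```

Impulse responses for every series in the panel then follow by iterating the factor VAR forward and mapping back through the loadings, which is what makes the data-rich analysis of the abstract feasible with so few parameters.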
Abstract:
This thesis is entitled "Performance of District Industries Centres in Kerala: An Application of the Augmented Solow Model". The first chapter deals with the evolution of approaches for promoting small-scale production and the growth of small-scale industries in India. Developing countries face problems such as sluggish growth, capital shortages, high levels of unemployment, enormous rural-urban economic disparities, regional inequalities, increasing concentration of capital, and chronic difficulties in the export sector. The review of literature and the methodology of the study are presented in the second chapter. In the third chapter an attempt has been made to make an in-depth study of the emergence and growth of District Industries Centres. In chapter four an attempt was made to study the organisational structure of DICs, the functions and responsibilities assigned to the functional managers, and the performance of the functionaries.
Abstract:
Scholarly communication over the past 10 to 15 years has gained tremendous momentum with the advent of the Internet and the World Wide Web. The web has transformed the ways in which people search, find, use, and communicate information. Innovations in web technology since 2005 have brought out an array of new services and facilities and an enhanced version of the web named Web 2.0. Web 2.0 facilitates a collaborative environment in which information users can interact with the information. Web 2.0 enables its users to create, annotate, review, share, re-use, and represent information in new ways, thereby optimizing information dissemination.
Abstract:
Marketing of information services is now an important goal of librarians all over the world to attract more users to the library, thereby promoting user satisfaction. Marketing is considered an integral part of libraries and information centres mainly due to the developments in information technology, the information explosion, and declining library budgets. Kotler defines marketing as the "analysis, planning, implementation and control of carefully formulated programs designed to bring about voluntary exchanges of values with target markets for the purpose of achieving organizational objectives". Organizations such as museums, universities, libraries, and charities need to market their causes and their products to gain political and social support as well as economic support (Kotler, 1995). In the marketing world people are now migrating from the traditional Four P model to the SAVE model (Alt~ns~n, 2013). According to the SAVE model, marketing in an organisation must give priority to 'Solutions' instead of the features or functions of the 'Product'. Similarly, it is to focus on 'Access' instead of 'Place', and on 'Value' instead of 'Price', so that the benefits are stressed rather than production costs. Instead of 'Promotion', marketing has to concentrate on 'Educating' the customers, providing information about their specific requirements, instead of relying on advertising, public relations, direct selling, etc. From a library point of view, to ensure maximum utilization of library services there is an increasing need for definite marketing plans to exploit the technological developments so as to entice the users. By anticipating their changing needs and by communicating with them it should be possible to devise strategies to present various library services and products in a perceptive style.
Abstract:
The library professional in an academic institution has to anticipate the changing expectations of users and be flexible in adopting new skills and levels of awareness. Technology has drastically changed the way librarians define themselves and the way they think about their profession and the institutions they serve. In addition to technical and professional skills, a commitment to user-centred services, and skills for effective oral and written communication, they must have other skills, including business and management, teaching, leadership, etc. Eventually, library and information professionals in academic libraries need to update their knowledge and skills in Information and Communication Technology (ICT), as ICT is a key success factor in enabling the library to perform its role as an information support system for society.
Abstract:
The paper aims to bring out the problems and prospects of the professional development opportunities of academic library professionals in the universities in Kerala. The study is part of research undertaken to survey the professional development activities and educational needs of library professionals in the major universities of Kerala in light of developments in information and communication technology. The study recommends methods for improving the knowledge and skills of library professionals. The aim of the study is to evaluate the professional development activities of library professionals and their attitude towards continuing education programmes. In order to achieve the objectives of the study, a survey was conducted with the help of structured questionnaires distributed to 203 library professionals in seven major universities in Kerala (South India), of which 185 questionnaires were returned. Results of the analysis show that the majority of the professionals have pursued higher degrees in library science or IT-allied courses after entering the profession, and that they have a positive attitude towards participation in training programmes and workshops. The results show that developments in ICT have a positive influence on the majority of library professionals' attitudes towards continuing education programmes.
Abstract:
The Central Library of Cochin University of Science and Technology (CUSAT) had been automated with proprietary software (Adlib Library) since 2000. After 11 years, in 2011, the university authorities decided to shift to an open source software (OSS) integrated library management system (ILMS), Koha, for automating the library's housekeeping operations. In this context, this study attempts to share the experiences in cataloging with both types of software. The features of the cataloging modules of both software packages are analysed on the basis of certain check points. It is found that the cataloging module of Koha is almost on par with that of proven proprietary software that has been in the market for the past 25 years. Some suggestions made by this study may be incorporated for the further development and perfection of Koha.
Abstract:
Context awareness, dynamic reconfiguration at runtime and heterogeneity are key characteristics of future distributed systems, particularly in ubiquitous and mobile computing scenarios. The main contributions of this dissertation are theoretical as well as architectural concepts facilitating information exchange and fusion in heterogeneous and dynamic distributed environments. Our main focus is on bridging the heterogeneity issues and, at the same time, considering uncertain, imprecise and unreliable sensor information in information fusion and reasoning approaches. A domain ontology is used to establish a common vocabulary for the exchanged information. We thereby explicitly support different representations for the same kind of information and provide Inter-Representation Operations that convert between them. Special account is taken of the conversion of associated meta-data that express uncertainty and impreciseness. The Unscented Transformation, for example, is applied to propagate Gaussian normal distributions across highly non-linear Inter-Representation Operations. Uncertain sensor information is fused using the Dempster-Shafer Theory of Evidence as it allows explicit modelling of partial and complete ignorance. We also show how to incorporate the Dempster-Shafer Theory of Evidence into probabilistic reasoning schemes such as Hidden Markov Models in order to be able to consider the uncertainty of sensor information when deriving high-level information from low-level data. For all these concepts we provide architectural support as a guideline for developers of innovative information exchange and fusion infrastructures that are particularly targeted at heterogeneous dynamic environments. Two case studies serve as proof of concept. The first case study focuses on heterogeneous autonomous robots that have to spontaneously form a cooperative team in order to achieve a common goal. 
The second case study is concerned with an approach for user activity recognition which serves as baseline for a context-aware adaptive application. Both case studies demonstrate the viability and strengths of the proposed solution and emphasize that the Dempster-Shafer Theory of Evidence should be preferred to pure probability theory in applications involving non-linear Inter-Representation Operations.
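Dempster's rule of combination, which underlies the evidence-fusion approach favoured above, can be sketched in a few lines. This is a generic textbook illustration rather than the dissertation's implementation; the frame of discernment (user activities) and the mass values are invented for the example.

```python
def combine(m1, m2):
    """Dempster's rule: fuse two mass functions keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for A, mA in m1.items():
        for B, mB in m2.items():
            C = A & B
            if C:
                combined[C] = combined.get(C, 0.0) + mA * mB
            else:
                conflict += mA * mB  # product mass falling on the empty set
    # Normalise by the non-conflicting mass (undefined under total conflict).
    return {C: v / (1.0 - conflict) for C, v in combined.items()}

# Two sensors report on a user's activity; the set {'walk', 'run'} expresses
# partial ignorance, which pure probability theory cannot represent directly.
m1 = {frozenset({'walk'}): 0.6, frozenset({'walk', 'run'}): 0.4}
m2 = {frozenset({'walk'}): 0.5, frozenset({'run'}): 0.3,
      frozenset({'walk', 'run'}): 0.2}
fused = combine(m1, m2)
```

The ability to place mass on non-singleton sets is exactly the explicit modelling of partial and complete ignorance that the dissertation argues for over pure probability theory.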
Abstract:
Previous research has considered entrepreneurship as a way out of poverty and as a chance to foster economic growth. Moreover, start-ups headed by women specifically have played an important role in economic development, and it has been argued that gender-related issues, amongst others, play a significant role in the performance of a country or region. Against this background, this qualitative study explores desires, reluctances and constraints toward entrepreneurial activities of a comparably homogenous group of potential (poor) entrepreneurs in an emerging economy—cleaning ladies in Istanbul. We focus on this particular context as rather little is still known about the reasons why women do not start a business (in Turkey). We believe exploring the reasons why certain individuals choose not to become entrepreneurs is at least as telling as investigating why they do so. We draw upon the social dimensions of entrepreneurship by Shapero and Sokol (1982) alongside Institutional Theory and posit that normative and cognitive forces may shape individual decisions on entrepreneurship. We identified two basic clusters of women and discuss possible hindrance factors undermining entrepreneurial desires and limitations for entrepreneurship as well as possible avenues for policy makers (and MNCs) to foster entrepreneurship in the given community.
Abstract:
We are currently at the cusp of a revolution in quantum technology that relies not just on the passive use of quantum effects, but on their active control. At the forefront of this revolution is the implementation of a quantum computer. Encoding information in quantum states as “qubits” makes it possible to use entanglement and quantum superposition to perform calculations that are infeasible on classical computers. The fundamental challenge in the realization of quantum computers is to avoid decoherence – the loss of quantum properties – due to unwanted interaction with the environment. This thesis addresses the problem of implementing entangling two-qubit quantum gates that are robust with respect to both decoherence and classical noise. It covers three aspects: the use of efficient numerical tools for the simulation and optimal control of open and closed quantum systems, the role of advanced optimization functionals in facilitating robustness, and the application of these techniques to two of the leading implementations of quantum computation, trapped atoms and superconducting circuits. After a review of the theoretical and numerical foundations, the central part of the thesis starts with the idea of using ensemble optimization to achieve robustness with respect to both classical fluctuations in the system parameters and decoherence. For the example of a controlled phase gate implemented with trapped Rydberg atoms, this approach is demonstrated to yield a gate that is at least one order of magnitude more robust than the best known analytic scheme. Moreover, this robustness is maintained even for gate durations significantly shorter than those obtained in the analytic scheme. Superconducting circuits are a particularly promising architecture for the implementation of a quantum computer. Their flexibility is demonstrated by performing optimizations for both diagonal and non-diagonal quantum gates.
In order to achieve robustness with respect to decoherence, it is essential to implement quantum gates in the shortest possible amount of time. This may be facilitated by using an optimization functional that targets an arbitrary perfect entangler, based on a geometric theory of two-qubit gates. For the example of superconducting qubits, it is shown that this approach leads to significantly shorter gate durations, higher fidelities, and faster convergence than optimization towards specific two-qubit gates. Performing optimization in Liouville space in order to properly take decoherence into account poses significant numerical challenges, as the dimension scales quadratically compared to Hilbert space. However, it can be shown that for a unitary target, the optimization only requires propagation of at most three states, instead of a full basis of Liouville space. Both for the example of trapped Rydberg atoms and for superconducting qubits, the successful optimization of quantum gates is demonstrated, at a significantly lower numerical cost than was previously thought possible. Together, the results of this thesis point towards a comprehensive framework for the optimization of robust quantum gates, paving the way for the future realization of quantum computers.
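A basic figure of merit behind such gate optimizations is the overlap between the realized unitary and a target two-qubit gate. The toy computation below (an invented NumPy example, not the thesis's optimization code) evaluates this phase-insensitive overlap for a CNOT target; optimization towards an arbitrary perfect entangler replaces this single-gate distance with a distance to the whole equivalence class of maximally entangling gates.

```python
import numpy as np

# Target two-qubit gate: CNOT in the computational basis.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def gate_overlap(U, V):
    """Phase-insensitive overlap |Tr(U^dagger V)| / d of two d-dim unitaries."""
    d = U.shape[0]
    return abs(np.trace(U.conj().T @ V)) / d
```

An ideal implementation gives gate_overlap(CNOT, CNOT) = 1; a gradient-based optimal control loop would adjust the pulse parameters that determine the realized unitary V so as to drive a functional like 1 - gate_overlap(target, V) to zero.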
Abstract:
This dissertation studies the effects of Information and Communication Technologies (ICT) on the banking sector and the payments system. It provides insight into how technology-induced changes occur, by exploring both the nature and scope of the main technology innovations and evidencing their economic implications for banks and payment systems. Some parts of the dissertation are descriptive. They summarise the main technological developments in the field of finance and link them to economic policies. These parts are complemented with sections of the study that focus on assessing the extent of technology application to banking and payment activities. Finally, it also includes some work that borrows from the economic literature on banking. The need for an interdisciplinary approach arises from the complexity of the topic and the rapid pace of change to which it is subject. The first chapter provides an overview of the influence of developments in ICT on the evolution of financial services and international capital flows. We include main indicators and discuss innovation in the financial sector, exchange rates and international capital flows. The chapter concludes with impact analysis and policy options regarding the international financial architecture, some monetary policy issues and the role of international institutions. The second chapter is a technology assessment study that focuses on the relationship between technology and money. The application of technology to payment systems is transforming the way we use money and, in some instances, is blurring the definition of what constitutes money. This chapter surveys the developments in electronic forms of payment and their relationship to the banking system. It also analyses the challenges posed by electronic money for regulators and policy makers, and in particular the opportunities created by two simultaneous processes: the Economic and Monetary Union and the increasing use of electronic payment instruments.
The third chapter deals with the implications of developments in ICT on relationship banking. The financial intermediation literature explains relationship banking as a type of financial intermediation characterised by proprietary information and multiple interactions with customers. This form of banking is important for the financing of small and medium-sized enterprises. We discuss the effects of ICT on the banking sector as a whole and then apply these developments to the case of relationship banking. The fourth chapter is an empirical study of the effects of technology on the banking business, using a sample of data from the Spanish banking industry. The design of the study is based on some of the events described in the previous chapters, and also draws from the economic literature on banking. The study shows that developments in information management have differential effects on wholesale and retail banking activities. Finally, the last chapter is a technology assessment study on electronic payments systems in Spain and the European Union. It contains an analysis of existing payment systems and ongoing or planned initiatives in Spain. It forms part of a broader project comprising a series of country-specific analyses covering ten European countries. The main issues raised across the countries serve as the starting point to discuss implications of the development of electronic money for regulation and policies, and in particular, for monetary-policy making.
Abstract:
This thesis proposes a decision-support framework suitable for supporting the distributed execution of cooperative actions in dynamic and complex multi-agent environments. Decision support is a process that seeks to improve decision-making in cooperative scenarios. This process occurs continuously in daily life. Humans, for example, must make decisions about what clothes to wear, what food to eat, and so on. In this sense, an agent is defined as anything that is situated in an environment and that acts, based on its observation, its interpretation, and its knowledge of its situation in that environment, to carry out a particular action. Therefore, to make decisions, agents must consider the knowledge that allows them to be aware of which actions they can or cannot execute. Here, this process takes into account three information parameters intended to embody an agent in a typically physical environment. This set of information is known as the decision axes, which agents must consider when deciding whether they can correctly execute a task proposed by another agent or by a human. Agents can therefore make better decisions by appropriately considering and representing such information. The decision axes, based mainly on the environmental conditions, the agent's physical knowledge, and the agent's confidence value, provide multi-agent systems with reliable reasoning to achieve feasible and successful cooperative performance. Currently, many researchers are working on new advances in agent technology to increase intelligence, autonomy, communication, and self-adaptation in typically open and distributed agent scenarios.
In this sense, this research aims to contribute to the development of a new method that has an impact on both the individual and the collective decisions of multi-agent systems. The proposed framework has therefore been used to implement the concrete actions involved in the robotic soccer testbed. This testbed emulates real soccer matches, where agents must coordinate, interact, and cooperate with one another to solve complex tasks within a dynamically changing and competitive scenario, both to handle the design requirements involved in the tasks and to demonstrate the framework's effectiveness in collective work. The results obtained in both the simulator and the real experimental field show that the proposed decision-support framework for situated agents is able to improve interaction and communication, resulting in adequate and reliable teamwork in unpredictable, dynamic, and competitive environments. Moreover, the experiments and results also show that the information selected to generate the decision axes for situating agents is useful when such agents must execute an action or make a commitment at each moment in order to successfully achieve a collective goal. Finally, some conclusions are presented emphasizing the advantages and usefulness of the proposed work in improving the collective performance of multi-agent systems in situations such as coordinated tasks and task allocation.
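The three decision axes described in this abstract (environmental conditions, physical knowledge, and the agent's confidence value) can be caricatured as a simple admissibility check before an agent commits to a proposed task. The class, field names, and threshold below are hypothetical illustrations, not the thesis's actual framework.

```python
from dataclasses import dataclass

@dataclass
class DecisionAxes:
    environment_ok: bool   # environmental conditions permit the action
    physically_able: bool  # the agent's physical knowledge says it can execute
    trust: float           # the agent's confidence value, in [0, 1]

def accept_task(axes: DecisionAxes, trust_threshold: float = 0.5) -> bool:
    """Commit to a task proposed by another agent only if all three axes allow it."""
    return axes.environment_ok and axes.physically_able and axes.trust >= trust_threshold
```

In a robotic soccer setting, for instance, a defender proposed a pass-interception task would decline it if its position makes the interception physically infeasible, even when the environmental conditions and its confidence value are favourable.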
Abstract:
Mainframes and corporate and central servers are becoming information servers. The requirement for more powerful information servers is the best opportunity to exploit the potential of parallelism. ICL recognized the opportunity of the 'knowledge spectrum', namely to convert raw data into information and then into high-grade knowledge. Its response to this, and to the underlying search problems, was to introduce the CAFS retrieval engine. The CAFS product demonstrates that it is possible to move functionality within an established architecture, introduce a different technology mix, and exploit parallelism to achieve radically new levels of performance. CAFS also demonstrates the benefit of achieving this transparently behind existing interfaces. ICL is now working with Bull and Siemens to develop the information servers of the future by exploiting new technologies as they become available. The objective of the joint Esprit II European Declarative System project is to develop a smoothly scalable, highly parallel computer system, EDS. EDS will mainly be an SQL server and an information server. It will support the many data-intensive applications which the companies foresee; it will also support application-intensive and logic-intensive systems.