977 results for GUI legacy Windows Form web-application
Abstract:
The high hopes for rapid convergence of the Eastern and Southern EU member states are increasingly being disappointed. With the onset of the Eurocrisis, convergence has given way to divergence in the southern members, and many Eastern members have made little headway in closing the development gap. The EU's performance compares unfavourably with East Asian success cases as well as with Western Europe's own rapid catch-up to the USA after 1945. Historical experience indicates that successful catch-up requires that less-developed economies be allowed, to some extent, to free-ride on an open international economic order. However, the EU's model is based on the principle of a level playing field, which militates against such a form of economic integration. The EU's developmental model thus contrasts with the various strategies that have enabled the successful catch-up of industrial latecomers. Instead, the EU's current approach is more and more reminiscent of the relations between the pre-1945 European empires and their dependent territories. One reason for this unfortunate historical continuity is that the EU appears to have become entangled in its own myths. In the EU's own interpretation, European integration is a peace project designed to overcome the almost continuous warfare that characterised the Westphalian system. As the sovereign state is identified as the root cause of all evil, any project to curtail its room for manoeuvre must ultimately benefit the common good. Yet the existence of a Westphalian system of nation states is a myth. Empires, not states, were the dominant actors in the international system for at least the last three centuries. If anything, the dawn of the age of the sovereign state in Western Europe occurred after 1945 with the disintegration of the colonial empires and thus historically coincided with the birth of European integration.
Abstract:
The development of models in Earth Sciences, e.g. for earthquake prediction and for the simulation of mantle convection, is far from finalized. There is therefore a need for a modelling environment that allows scientists to implement and test new models in an easy but flexible way. After being verified, the models should be easy to apply within their scope, typically by setting input parameters through a GUI or web services. It should be possible to link certain parameters to external data sources, such as databases and other simulation codes. Moreover, as large-scale meshes typically have to be used to achieve appropriate resolutions, the computational efficiency of the underlying numerical methods is important. Conceptually, this leads to a software system with three major layers: the application layer, the mathematical layer, and the numerical algorithm layer. The latter is implemented as a C/C++ library to solve a basic, computationally intensive linear problem, such as a linear partial differential equation. The mathematical layer allows the model developer to define his model and to implement high-level solution algorithms (e.g. a Newton-Raphson scheme or a Crank-Nicolson scheme) or to choose these algorithms from an algorithm library. The kernels of the model are generic, typically linear, solvers provided through the numerical algorithm layer. Finally, to provide an easy-to-use application environment, a web interface is (semi-automatically) built to edit the XML input file for the modelling code. In the talk, we will discuss the advantages and disadvantages of this concept in more detail. We will also present the modelling environment escript, a prototype implementation of such a software system in Python (see www.python.org). Key components of escript are the Data class and the PDE class. Objects of the Data class allow generating, holding, accessing, and manipulating data in such a way that the representation best suited to the particular context is transparent to the user. They are also the key to establishing connections with external data sources. PDE class objects describe (linear) partial differential equations to be solved by a numerical library. The current implementation of escript has been linked to the finite element code Finley to solve general linear partial differential equations. We will give a few simple examples which illustrate the usage of escript. Moreover, we show the usage of escript together with Finley for the modelling of interacting fault systems and for the simulation of mantle convection.
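To illustrate the layered design described above, here is a minimal sketch of solving a simple linear PDE with escript and Finley. It assumes the module and class names of the released esys-escript Python package (Rectangle, Poisson, whereZero), which may differ from the prototype discussed in the talk.

    # A minimal sketch, assuming the esys-escript Python API; names may differ
    # from the prototype described above.
    from esys.escript import whereZero
    from esys.escript.linearPDEs import Poisson
    from esys.finley import Rectangle

    # Numerical algorithm layer: Finley generates a 2-D finite element mesh.
    domain = Rectangle(l0=1.0, l1=1.0, n0=40, n1=40)
    x = domain.getX()

    # Fix the solution to zero on the left and bottom boundaries.
    gammaD = whereZero(x[0]) + whereZero(x[1])

    # Mathematical layer: define a Poisson equation with a unit source term
    # and hand it to the numerical library for solution.
    pde = Poisson(domain=domain)
    pde.setValue(f=1.0, q=gammaD)
    u = pde.getSolution()  # u is a Data object holding the finite element solution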
Abstract:
This thesis aimed to understand how the teaching staff of the Universidade Estadual de Mato Grosso do Sul (UEMS) perceive, understand, and react to the adoption and use of Information and Communication Technologies (ICTs) in the institution's undergraduate courses, considering the new dialogic communication processes these technologies can enable in today's society. Methodologically, the thesis comprises bibliographic research grounding the fields of Education and Communication, as well as Educommunication; documentary research to contextualise the locus of the study; and an exploratory survey based on an online questionnaire answered voluntarily by 165 UEMS lecturers. It was found that lecturers use ICTs daily in their personal activities and, to a lesser extent, in professional settings. The challenges lie in training these lecturers better and offering continuing professional development so that they use ICTs more effectively in the classroom. It is also noted that advances in technology and new communication ecosystems have built new and different realities, making learning a non-linear process and demanding a revision of pedagogical projects in higher education so that they enable constructive dialogue between communication and education. The institutional infrastructure for ICTs is another obstacle identified, both in the acquisition and in the maintenance of these technological resources by the University. Finally, the thesis proposes further studies and research to discuss changes in lecturers' employment contracts, since working appropriately with ICTs demands more of their time and dedication.
Abstract:
We present a vision and a proposal for using Semantic Web technologies in the organic food industry. This is a very knowledge-intensive industry at every step from the producer, to the caterer or restaurateur, through to the consumer. There is a crucial need for a concept of environmental audit which would allow the various stakeholders to know the full environmental impact of their economic choices. This is a different and parallel form of knowledge to that of price. Semantic Web technologies can be used effectively for the calculation and transfer of this type of knowledge (together with other forms of multimedia data), which could contribute considerably to the commercial and educational impact of the organic food industry. We outline how this could be achieved, as our essential objective is to show how advanced technologies could be used both to reduce ecological impact and to increase public awareness.
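As a rough illustration of how such environmental-audit knowledge could be encoded for exchange, the sketch below builds a small RDF graph with the Python rdflib library. The ex: vocabulary and the carbon-footprint figure are invented for illustration and are not taken from the paper.

    # A hypothetical sketch of publishing environmental-audit data as RDF with
    # rdflib; the ex: vocabulary and the numeric value are invented.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    EX = Namespace("http://example.org/organic-food#")

    g = Graph()
    g.bind("ex", EX)

    product = EX["carrot-batch-42"]
    g.add((product, RDF.type, EX.OrganicProduct))
    g.add((product, EX.producedBy, EX["farm-greenacres"]))
    g.add((product, EX.carbonFootprintKg, Literal(0.35, datatype=XSD.double)))

    print(g.serialize(format="turtle"))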
Abstract:
This thesis describes the development of a complete data visualisation system for large tabular databases, such as those commonly found in a business environment. A state-of-the-art 'cyberspace cell' data visualisation technique was investigated and a powerful visualisation system using it was implemented. Although it allowed databases to be explored and conclusions to be drawn, it had several drawbacks, most of which were due to the three-dimensional nature of the visualisation. A novel two-dimensional generic visualisation system, known as MADEN, was then developed and implemented, based upon a 2-D matrix of 'density plots'. MADEN allows an entire high-dimensional database to be visualised in one window, while permitting close analysis in 'enlargement' windows. Selections of records can be made and examined, and dependencies between fields can be investigated in detail. MADEN was used as a tool for investigating and assessing many data processing algorithms, first data-reducing (clustering) methods, then dimensionality-reducing techniques. These included a new 'directed' form of principal components analysis, several novel applications of artificial neural networks, and discriminant analysis techniques which illustrated how groups within a database can be separated. To illustrate the power of the system, MADEN was used to explore customer databases from two financial institutions, resulting in a number of discoveries that would be of interest to a marketing manager. Finally, the database of results from the 1992 UK Research Assessment Exercise was analysed. Using MADEN allowed both universities and disciplines to be compared graphically, and supplied some startling revelations, including empirical evidence of the 'Oxbridge factor'.
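Purely to illustrate the idea of a matrix of 2-D density plots (this is not the MADEN implementation, and the field names and data are invented), a generic sketch with NumPy and matplotlib might look like this:

    # A generic sketch of a density-plot matrix, not the MADEN system itself:
    # each panel is a 2-D histogram of one pair of fields from a tabular dataset.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    data = rng.normal(size=(5000, 4))                # stand-in for 5000 records, 4 fields
    fields = ["age", "income", "balance", "tenure"]  # invented field names

    n = len(fields)
    fig, axes = plt.subplots(n, n, figsize=(8, 8))
    for i in range(n):
        for j in range(n):
            ax = axes[i, j]
            ax.hist2d(data[:, j], data[:, i], bins=40, cmap="viridis")
            ax.set_xticks([])
            ax.set_yticks([])
            if i == n - 1:
                ax.set_xlabel(fields[j])
            if j == 0:
                ax.set_ylabel(fields[i])
    fig.suptitle("Matrix of pairwise density plots")
    plt.show()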
Abstract:
Despite high expectations, the industrial take-up of Semantic Web technologies in developing services and applications has been slower than expected. One of the main reasons is that many legacy systems have been developed without considering the potential of the Web in integrating services and sharing resources. Without a systematic methodology and proper tool support, the migration from legacy systems to Semantic Web Service-based systems can be a tedious and expensive process, which carries a significant risk of failure. There is an urgent need to provide strategies allowing the migration of legacy systems to Semantic Web Services platforms, and also tools to support such strategies. In this paper we propose a methodology and its tool support for transitioning these applications to Semantic Web Services, which allows users to migrate their applications to Semantic Web Services platforms automatically or semi-automatically. The transition of the GATE system is used as a case study. © 2009 IOS Press and the authors. All rights reserved.
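As a loose, hypothetical illustration of the general idea of exposing legacy functionality as a service with a machine-readable description (this is not the methodology or tooling proposed in the paper, and the ontology URIs are invented), consider the following sketch:

    # A hypothetical sketch only: a legacy function wrapped behind a minimal WSGI
    # service, paired with a machine-readable description. The ontology URIs are
    # invented; this is not the paper's migration methodology or tooling.
    import json
    from wsgiref.simple_server import make_server

    def legacy_tokenise(text):
        """Stand-in for a legacy text-processing routine."""
        return text.split()

    # Minimal semantic description of the wrapped operation (hypothetical vocabulary).
    SERVICE_DESCRIPTION = {
        "operation": "tokenise",
        "input": {"name": "text", "concept": "http://example.org/nlp#Document"},
        "output": {"concept": "http://example.org/nlp#TokenList"},
    }

    def app(environ, start_response):
        if environ["PATH_INFO"] == "/description":
            body = json.dumps(SERVICE_DESCRIPTION).encode()
        else:
            size = int(environ.get("CONTENT_LENGTH") or 0)
            text = environ["wsgi.input"].read(size).decode()
            body = json.dumps({"tokens": legacy_tokenise(text)}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]

    if __name__ == "__main__":
        make_server("localhost", 8080, app).serve_forever()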
Abstract:
In current organizations, valuable enterprise knowledge is often buried under a rapidly expanding amount of unstructured information in the form of web pages, blogs, and other forms of human text communication. We present a novel unsupervised machine learning method called CORDER (COmmunity Relation Discovery by named Entity Recognition) to turn these unstructured data into structured information for knowledge management in these organizations. CORDER exploits named entity recognition and co-occurrence data to associate individuals in an organization with their expertise and associates. We discuss the problems associated with evaluating unsupervised learners and report our initial evaluation experiments: an expert evaluation, a quantitative benchmark, and an application of CORDER in a social networking tool called BuddyFinder.
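The following is a simplified sketch of the co-occurrence idea that CORDER builds on, not the CORDER algorithm itself: given named entities already recognised in each document, candidate relations for a target entity are ranked by how often and how closely the entities co-occur. The inverse-distance weighting is invented for illustration.

    # A simplified sketch of co-occurrence-based relation discovery (not the
    # actual CORDER algorithm): rank entities by how often and how closely they
    # co-occur with a target entity across documents.
    from collections import defaultdict

    def rank_related(target, documents):
        """documents: list of lists of (entity, position) pairs from an NER step."""
        scores = defaultdict(float)
        for entities in documents:
            positions = defaultdict(list)
            for name, pos in entities:
                positions[name].append(pos)
            if target not in positions:
                continue
            for name, pos_list in positions.items():
                if name == target:
                    continue
                for t_pos in positions[target]:
                    for pos in pos_list:
                        # Closer mentions contribute more (inverse-distance weighting).
                        scores[name] += 1.0 / (1.0 + abs(pos - t_pos))
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    docs = [
        [("Alice", 0), ("Semantic Web", 3), ("Bob", 12)],
        [("Alice", 2), ("Semantic Web", 4)],
    ]
    print(rank_related("Alice", docs))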
Abstract:
The Protein pKa Database (PPD) v1.0 provides a compendium of protein residue-specific ionization equilibria (pKa values), as collated from the primary literature, in the form of a web-accessible PostgreSQL relational database. Ionizable residues play key roles in the molecular mechanisms that underlie many biological phenomena, including protein folding and enzyme catalysis. The PPD serves as a general protein pKa archive and as a source of data that allows for the development and improvement of pKa prediction systems. The database is accessed through an HTML interface, which offers two fast, efficient search methods: an amino acid-based query and a Basic Local Alignment Search Tool (BLAST) search. Entries also give details of experimental techniques and links to other key databases, such as the National Center for Biotechnology Information (NCBI) and the Protein Data Bank, providing the user with considerable background information.
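As a purely illustrative sketch of the data shape behind such a resource (the real PPD schema is not given in the abstract, the column names are hypothetical, and SQLite stands in for the PostgreSQL back end to keep the example self-contained), an amino acid-based query might look like this:

    # An illustrative sketch only: a simplified residue-pKa table and an amino
    # acid-based query. Column names are hypothetical; SQLite replaces the
    # PostgreSQL back end purely for self-containment.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE pka_measurement (
            protein_name TEXT,
            pdb_id       TEXT,
            residue_type TEXT,     -- three-letter amino acid code
            residue_num  INTEGER,
            pka_value    REAL,
            method       TEXT      -- experimental technique, e.g. NMR titration
        )
    """)
    conn.execute(
        "INSERT INTO pka_measurement VALUES (?, ?, ?, ?, ?, ?)",
        ("hen egg-white lysozyme", "1E8L", "GLU", 35, 6.2, "NMR"),
    )

    # Amino acid-based query: all glutamate pKa values, highest first.
    rows = conn.execute(
        "SELECT protein_name, residue_num, pka_value FROM pka_measurement "
        "WHERE residue_type = ? ORDER BY pka_value DESC",
        ("GLU",),
    ).fetchall()
    print(rows)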
Abstract:
UncertWeb is a European research project running from 2010 to 2013 that will realize the uncertainty-enabled model web. The assumption is that data services, in order to be useful, need to provide information about the accuracy or uncertainty of the data in a machine-readable form. Models taking these data as input should understand this information, propagate the errors through their computations, and quantify and communicate the errors or uncertainties generated by the models' own approximations. The project will develop technology to realize this and provide demonstration case studies.
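As a minimal illustration of the error-propagation idea (not UncertWeb's actual technology; the toy model and the uncertainty figures are invented), input uncertainty can be pushed through a model by Monte Carlo sampling:

    # A minimal sketch of propagating input uncertainty through a model by Monte
    # Carlo sampling. The toy model and the uncertainty figures are invented;
    # this is not the UncertWeb technology itself.
    import numpy as np

    def toy_model(rainfall_mm):
        """Hypothetical model: runoff as a nonlinear function of rainfall."""
        return 0.6 * rainfall_mm ** 1.2

    rng = np.random.default_rng(42)
    # Uncertain input: rainfall reported as 50 mm with a standard deviation of 5 mm.
    rainfall_samples = rng.normal(loc=50.0, scale=5.0, size=10_000)

    runoff_samples = toy_model(rainfall_samples)
    print(f"runoff mean = {runoff_samples.mean():.1f} mm, "
          f"std = {runoff_samples.std():.1f} mm")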