900 results for Information technologies and communications
Targeted! Population segmentation, electronic surveillance and governing the unemployed in Australia
Abstract:
Targeting is increasingly used to manage people. It operates by segmenting populations and providing different levels of opportunities and services to these groups, each of which is subject to different levels of surveillance and scrutiny. This article examines the deployment of targeting in Australian social security. Three case studies of targeting are presented: Australia's management of benefit overpayment and fraud, the distribution of employment services, and the application of workfare. Conceptualizing surveillance as governance, the analysis examines the rationalities, technologies and practices that make targeting thinkable, practicable and achievable. In the case studies, targeting is variously conceptualized and justified by calculative risk discourses, moral discourses of obligation and notions of welfare dependency. Advanced information technologies are also seen as particularly important in giving rise to the capacity to think about and act on population segments.
Abstract:
One of the most important advantages of database systems is that the underlying mathematics is rich enough to specify very complex operations with a small number of statements in the database language. This research covers an aspect of biological informatics, the marriage of information technology and biology, involving the study of real-world phenomena using virtual plants derived from L-systems simulation. L-systems were introduced by Aristid Lindenmayer as a mathematical model of multicellular organisms. Little consideration has been given to the problem of persistent storage for these simulations, and current procedures for querying the data generated by L-systems for scientific experiments, simulations and measurements are also inadequate. To address these problems, this paper presents a generic data-modelling process (L-DBM) that bridges L-systems and database systems. The paper shows how L-system productions can be generically and automatically represented in database schemas and how a database can be populated from the L-system strings. It further describes the idea of pre-computing recursive structures in the data into derived attributes using compiler generation, and supplies a method that maps biologists' terms to compiler-generated terms in a biologist's computing environment. Given any specific set of L-system productions and declarations, the L-DBM can generate the corresponding schema, covering both simple terminology correspondences and complex recursive data attributes and relationships.
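A minimal sketch of the idea of persisting L-system derivations in a relational database is given below. It is not the paper's L-DBM implementation: the toy production set (Lindenmayer's algae system) and the two-table schema (production, derivation) are assumptions made purely for illustration.

```python
# Hedged sketch: a toy L-system derived in Python and stored in SQLite.
# The schema below (production/derivation tables) is hypothetical and only
# illustrates how productions and derived strings could be persisted.
import sqlite3

axiom = "A"
productions = {"A": "AB", "B": "A"}  # Lindenmayer's algae example

def derive(axiom, productions, steps):
    """Apply all productions in parallel to every symbol, `steps` times."""
    strings = [axiom]
    current = axiom
    for _ in range(steps):
        current = "".join(productions.get(symbol, symbol) for symbol in current)
        strings.append(current)
    return strings

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE production (predecessor TEXT PRIMARY KEY, successor TEXT);
    CREATE TABLE derivation (step INTEGER PRIMARY KEY, string TEXT);
""")
conn.executemany("INSERT INTO production VALUES (?, ?)", productions.items())
conn.executemany("INSERT INTO derivation VALUES (?, ?)",
                 enumerate(derive(axiom, productions, steps=5)))

# Once stored, the derivations can be queried like any other relational data.
for step, string in conn.execute("SELECT step, string FROM derivation ORDER BY step"):
    print(step, string)
```

In an L-DBM-style pipeline the schema would be generated automatically from the grammar declarations rather than written by hand, and derived attributes for recursive structures would be pre-computed at population time.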
Abstract:
In an increasingly ageing Europe, the importance and potential of ICT-based services for ageing well is now recognized, as exemplified by GuiMarket, the electronic market of social services and health care proposed by the authors. However, this new range of services requires that individuals have advanced digital skills in order to participate fully in society. Based on the results of a survey of a sample of 315 individuals, this paper discusses the importance granted to GuiMarket and the intended frequency of its use, concluding that there is a close relationship between ICT access and the use that respondents anticipate making of GuiMarket and similar services.
Abstract:
The use of Mobile and Wireless Information Technologies (MWIT) for the provision of public services by a government is a relatively recent phenomenon. This paper evaluates the results of MWIT adoption by IBGE (The Brazilian Institute of Geography and Statistics) through a case study. In 2007, IBGE deployed 82,000 mobile devices (PDAs) for data gathering in a census operation in Brazil. The large-scale application of MWIT posed a set of challenges that demanded intensive work, innovative working practices and new service goals. The case reveals a set of outcomes of this process, such as time and cost reductions in service provision, improved information quality, staff training and increased organizational effectiveness and agility.
Abstract:
Today, information overload and the lack of systems for locating employees with the right knowledge or skills are common challenges that large organisations face. They force knowledge workers to re-invent the wheel and make it difficult to retrieve information from both internal and external sources. In addition, information is changing dynamically and ownership of data is moving from corporations to individuals. At the same time, a set of web-based tools may bring about major progress in the way people collaborate and share their knowledge. This article aims to analyse the impact of ‘Web 2.0’ on organisational knowledge strategies. A comprehensive literature review presents the academic background, followed by a review of current ‘Web 2.0’ technologies and an assessment of their strengths and weaknesses. As the framework of this study is oriented towards business applications, the characteristics of the relevant segments and tools are reviewed from an organisational point of view. Moreover, the ‘Enterprise 2.0’ paradigm implies not only tools but also changes in the way people collaborate and in the way work is done (processes), and it ultimately has an impact on other technologies. Finally, gaps in the literature in this area are outlined.
Abstract:
Lifelong learning (LLL) has received increasing attention in recent years. It implies that learning should take place at all stages of the “life cycle and it should be life-wide, that is embedded in all life contexts from the school to the work place, the home and the community” (Green, 2002, p. 613). The ‘learning society’ is the vision of a society in which there are recognized opportunities for learning for every person, wherever they are and however old they happen to be. Globalization and the rise of new information technologies are among the driving forces behind the depreciation of specialised competences, which happens very quickly in terms of economic value; consequently, workers of all skill levels must have the opportunity, during their working life, to update “their technical skills and enhance general skills to keep pace with continuous technological change and new job requirements” (Fahr, 2005, p. 75). It is in this context that LLL tops the policy agenda of international bodies, national governments and non-governmental organizations in the field of education and training, justifying the need for LLL opportunities for the population as it faces contemporary employability challenges. It is also in this context that the requirement and interest to analyse the behaviour patterns of adult learners has developed over the last few years.
Abstract:
An increasing amount of research is being developed in the area where technology and humans meet. The success or failure of technologies, and the question of whether technology helps humans to fulfill their goals or hinders them, is in most cases not a technical one. User Perception and Influencing Factors of Technology in Everyday Life addresses issues of human and technology interaction. The research in this work is interdisciplinary, ranging from more technical subjects such as computer science, engineering and information systems to non-technical descriptions of technology and human interaction from the point of view of sociology or philosophy. This book is well suited to academics, researchers and professionals alike, as it presents a set of theories that allow us to understand the interaction of technology and humans and to put it to practical use.
Abstract:
We describe a novel approach to exploring DNA nucleotide sequence data, aiming to produce high-level categorical and structural information about the underlying chromosomes, genomes and species. The article starts by analyzing chromosomal data through histograms of fixed-length DNA sequences (words). After the DNA-related histograms are created, a correlation between each pair of histograms is computed, producing a global correlation matrix. These data are then used as input to several data-processing methods for information extraction and tabular/graphical output generation. A set of 18 species is processed, and the extensive results reveal that the proposed method generates significant and diversified outputs, in good accordance with current scientific knowledge in domains such as genomics and phylogenetics.
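A minimal sketch of the histogram-and-correlation step is shown below, using toy sequences and a word length of two; the actual word lengths, species data and downstream processing used in the article are not reproduced here.

```python
# Hedged sketch: fixed-length DNA word (k-mer) histograms for a few toy
# sequences, followed by a global correlation matrix between the histograms.
from itertools import product
import numpy as np

def kmer_histogram(sequence, k):
    """Count every fixed-length DNA word (k-mer) in the sequence."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = dict.fromkeys(kmers, 0)
    for i in range(len(sequence) - k + 1):
        word = sequence[i:i + k]
        if word in counts:
            counts[word] += 1
    return np.array([counts[kmer] for kmer in kmers], dtype=float)

sequences = {
    "seq1": "ACGTACGTGGCCAATTACGT",
    "seq2": "TTGGCCAACGTACGTACGTA",
    "seq3": "AAAATTTTCCCCGGGGACGT",
}
histograms = np.vstack([kmer_histogram(s, k=2) for s in sequences.values()])

# Pearson correlation between every pair of histograms: the global
# correlation matrix that feeds the subsequent processing methods.
correlation_matrix = np.corrcoef(histograms)
print(correlation_matrix.round(2))
```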
Abstract:
Proteins are biochemical entities consisting of one or more blocks typically folded in a 3D pattern. Each block (a polypeptide) is a single linear sequence of amino acids that are biochemically bonded together. The amino acid sequence in a protein is defined by the sequence of a gene, or of several genes, encoded in the DNA-based genetic code. This genetic code typically uses twenty amino acids, but in certain organisms it can also include two others. After the amino acids are linked during protein synthesis, each becomes a residue in the protein, which is then chemically modified, ultimately changing and defining the protein's function. In this study, the authors analyze amino acid sequences using alignment-free methods, aiming to identify structural patterns in sets of proteins and in the proteome without any prior assumptions. The paper starts by analyzing amino acid sequence data by means of histograms of fixed-length amino acid words (tuples). After the initial relative-frequency histograms are created, they are transformed and processed to generate quantitative results for information extraction and graphical visualization. Selected samples from two reference datasets are used, and the results reveal that the proposed method generates relevant outputs in accordance with current scientific knowledge in domains such as protein sequence and proteome analysis.
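The alignment-free comparison can be pictured with the small sketch below, which builds relative-frequency histograms of amino acid tuples and compares two toy sequences with a Euclidean distance; the tuple length, transformations and reference datasets used in the paper are not reproduced here.

```python
# Hedged sketch: relative-frequency histograms of fixed-length amino acid
# words (tuples) and a simple alignment-free distance between two proteins.
from collections import Counter
import math

def tuple_frequencies(sequence, k=2):
    """Relative frequency of every length-k amino acid word in the sequence."""
    words = [sequence[i:i + k] for i in range(len(sequence) - k + 1)]
    counts = Counter(words)
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

def euclidean_distance(freq_a, freq_b):
    """Alignment-free distance between two relative-frequency histograms."""
    words = set(freq_a) | set(freq_b)
    return math.sqrt(sum((freq_a.get(w, 0.0) - freq_b.get(w, 0.0)) ** 2
                         for w in words))

protein_a = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # toy sequences, not taken
protein_b = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA"  # from the paper's datasets
print(euclidean_distance(tuple_frequencies(protein_a),
                         tuple_frequencies(protein_b)))
```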
Abstract:
Knowledge is central to the modern economy and society. Indeed, the knowledge society has transformed the concept of knowledge and is increasingly aware of the need to overcome the lack of knowledge when it has to make choices or address its problems and dilemmas. One's knowledge is based less on exact facts and more on hypotheses, perceptions or indications. Even when we use new computational artefacts and novel methodologies for problem solving, such as Group Decision Support Systems (GDSSs), the question of incomplete information is in most situations marginalized. On the other hand, common sense tells us that when a decision is made it is impossible to have a perception of all the information involved and of the nature of its intrinsic quality. Therefore, something has to be done in terms of the information available and the process of its evaluation. It is within this framework that a Multi-valued Extended Logic Programming language is used for knowledge representation and reasoning, leading to a model that embodies the Quality-of-Information (QoI) and its quantification across the several stages of the decision-making process. In this way, it is possible to provide a measure of the value of the QoI that supports the decision itself. The model is presented here in the context of a GDSS for VirtualECare, a system aimed at sustaining online healthcare services.
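One common way to quantify QoI in this line of work scores each attribute as 1 when its value is known, 0 when it is unknown, and 1/n when it is only known to lie in a set of n candidate values. The sketch below uses that rule; the arithmetic-mean aggregation and the healthcare-flavoured attribute names are illustrative assumptions, not the VirtualECare implementation.

```python
# Hedged sketch: a per-attribute QoI score and a simple aggregation for one
# decision scenario. Scoring rule, aggregation and attribute names are
# assumptions for illustration only.

UNKNOWN = object()  # marker for "value not known at all"

def qoi(value):
    """Quality-of-Information score for a single attribute."""
    if value is UNKNOWN:
        return 0.0
    if isinstance(value, (set, frozenset)):      # unknown, but one of n values
        return 1.0 / len(value) if value else 0.0
    return 1.0                                   # exact value is known

def scenario_qoi(attributes):
    """Aggregate the per-attribute scores for one decision scenario (mean)."""
    scores = [qoi(v) for v in attributes.values()]
    return sum(scores) / len(scores)

patient = {
    "blood_pressure": 128,                 # known exactly         -> 1.0
    "allergy": {"penicillin", "latex"},    # one of two candidates -> 0.5
    "family_history": UNKNOWN,             # completely unknown    -> 0.0
}
print(scenario_qoi(patient))  # 0.5 for this toy scenario
```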
Abstract:
Magnetic resonance (MR) imaging has been used to analyse and evaluate the vocal tract shape through different techniques, with promising results in several fields. Our purpose is to demonstrate the relevance of MR and image processing for the study of the vocal tract. The extraction of the contours of the air cavities allowed the set-up of a number of 3D reconstruction image stacks, obtained by combining orthogonally oriented sets of slices for each articulatory gesture, as a new approach to overcome the expected spatial under-sampling of the imaging process. As a result, these models give improved information for the visualization of morphologic and anatomical aspects and are useful for partial measurements of the vocal tract shape in different situations. Potential uses can be found in medical and therapeutic applications as well as in acoustic articulatory speech modelling.
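The core idea of combining orthogonally oriented stacks can be pictured with the small numpy sketch below, which resamples two coarse, mutually orthogonal binary stacks onto a common isotropic grid and fuses them by averaging; the shapes, interpolation order and fusion rule are assumptions for illustration, not the reconstruction pipeline used in the study.

```python
# Hedged sketch: fuse two orthogonally oriented slice stacks into one volume
# to mitigate through-plane under-sampling. All shapes and data are synthetic.
import numpy as np
from scipy.ndimage import zoom

# Hypothetical segmented air-cavity stacks (1 = air, 0 = tissue):
# fine in-plane resolution, coarse through-plane resolution.
sagittal_stack = (np.random.rand(8, 64, 64) > 0.5).astype(float)  # few sagittal slices
coronal_stack = (np.random.rand(64, 8, 64) > 0.5).astype(float)   # few coronal slices

target_shape = (64, 64, 64)

def resample(stack, shape):
    """Linearly interpolate a stack onto the common isotropic grid."""
    factors = [t / s for t, s in zip(shape, stack.shape)]
    return zoom(stack, factors, order=1)

# Averaging the two resampled volumes is one simple fusion rule.
volume = (resample(sagittal_stack, target_shape) +
          resample(coronal_stack, target_shape)) / 2.0
print(volume.shape)  # (64, 64, 64)
```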
Abstract:
In the history of modern communication, after the development of the printing press, the telegraph unleashed a revolution in communications of which the Internet is the contemporary heir. Reflecting on the telegraph can open up perspectives on the tendencies, possibilities and problems posed by the Internet. The telegraph has been the object of studies that tend to privilege above all the history of this technology, its social context and its institutional meaning (e.g. Thompson, 1947; Standage, 2007 [1998]). James W. Carey, in his essay “Technology and Ideology. The Case of the Telegraph”, proposes a distinct approach. In the telegraph he sees the prototype of many of the science- and technology-based commercial empires that followed it, a pioneering model for the management of complex enterprises; one of the promoters of the national configuration of the market and of a national communications system; and a catalyst of futurist and utopian thinking about information technologies. With the communications revolution promoted by the Internet on the horizon, the article revisits that seminal essay to explore the reach, but also the problems, of a perspective that conceives the innovation of the telegraph as a metaphor for all the innovations that announced the historical period of modernity and that has, down to the present day, determined the main lines of development of modern communications.
Abstract:
Information and Communication Technologies call into question the way we think about and look at the traditional book, and give rise to new materialities for the text and to new forms, spaces and genres of reading. The revolution brought about by Gutenberg democratized access to the traditional book, changing the processes of access to knowledge. Electronic technologies brought with them a new revolution, virtualizing access to texts and altering the landscape of the classical culture of the book. Drawing on the original idea of the researcher Nicholas Negroponte, the aim is to analyse the changes the book has undergone, as an object supporting the text, in its passage to digital contexts, that is, in the passage from “atoms” to “bits”.
Abstract:
The discussion and analysis of the diverse outreach activities in this article provide guidance and suggestions for academic librarians interested in outreach and community engagement of any scale and nature. Cases are drawn from a wide spectrum and are particularly strong in the setting of large academic libraries, special collections and programming for multicultural populations. The aim of this study is to present the results of research carried out on the needs, demand and consumption of European Union information by users of European Documentation Centres (EDC). A quantitative methodology was chosen, based on a questionnaire with 24 items. This questionnaire was distributed in the EDC of Salamanca, Spain, and the EDC of Porto, Portugal, during specific time intervals between 2010 and 2011. We examined the level of EU information that EDC users possess, and identified the factors that facilitate or hinder access to EU information, the topics most in demand, and the types of documents consulted. We also analysed the use that consumers of European information make of databases and their behaviour during consultation. Although the sample was not very significant owing to its small size, it is a faithful reflection of the scarcity of visits made to EDCs. This study can be of use to managers of EDCs, providing them with better knowledge of the information needs and demands of their users; ultimately, this should lead to improvements in the services offered. The study lies within a field of research scarcely addressed in the specialized scholarly literature: European Union information.
Abstract:
In the history of modern communication, after the development of the printing press, the telegraph unleashed a revolution in communications; today, the Internet is in many ways its heir. Reflecting on the telegraph may open up perspectives concerning the tendencies, possibilities and pitfalls of the Internet. The telegraph has been well explored in an important literature on communication and media that tends to emphasize the history of this technology, its social context and its institutional meaning (e.g. Robert L. Thompson, 1947; Tom Standage, 2007 [1998]). James W. Carey, the mentor of North American critical cultural studies, in his essay "Technology and Ideology. The Case of the Telegraph" (2009 [1983]), suggests a distinctive approach. In the telegraph, Carey sees the prototype of many subsequent commercial empires based on science and technology, a pioneering model for complex business management; an example of the struggle of interests over the control of patents; an inducer of changes both in language and in structures of knowledge; and a promoter of futurist and utopian thinking about information technologies. With the communications revolution promoted by the Internet in mind, this paper revisits this seminal essay to explore its great reach, as well as the problems of an approach that conceives the innovation of the telegraph as a metaphor for all the innovations announcing the modern stage of history and still determining today the major lines of development in modern communication systems.