896 results for pacs: information technology application
Abstract:
The delivery of products and services for construction-based businesses is increasingly becoming knowledge-driven and information-intensive. The proliferation of building information modelling (BIM) has increased business opportunities as well as introduced new challenges for the architecture, engineering, construction and facilities management (AEC/FM) industry. As such, the effective use, sharing and exchange of building life cycle information, and knowledge management in building design, construction, maintenance and operation, assume paramount importance. This paper identifies a subset of construction management (CM) relevant knowledge for different design conditions of building components through a critical, comprehensive review of synthesized literature and other information gathering and knowledge acquisition techniques. It then explores how such domain knowledge can be formalized as ontologies and, subsequently, a query vocabulary in order to equip BIM users with the capacity to query digital models of a building for the retrieval of useful and relevant domain-specific information. The formalized construction knowledge is validated through interviews with domain experts in relation to four case study projects. Additionally, retrospective analyses of several design conditions are used to demonstrate the soundness (realism), completeness, and appeal of the knowledge base and query-based reasoning approach in relation to the state-of-the-art tools Solibri Model Checker and Navisworks. The knowledge engineering process and the methods applied in this research for information representation and retrieval could provide useful mechanisms to leverage BIM in support of a number of knowledge-intensive CM/FM tasks and functions.
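To make the ontology-and-query idea concrete, the sketch below runs a SPARQL query over an RDF export of a building model using rdflib. The namespace, class, and property names are hypothetical placeholders for illustration, not the paper's actual vocabulary.

```python
# Minimal sketch: query a formalized construction-knowledge ontology.
# The cm: namespace and its terms are hypothetical, not the paper's ontology.
import rdflib

g = rdflib.Graph()
g.parse("building_model.ttl", format="turtle")  # hypothetical RDF export of a BIM model

# "Find all wall components whose design condition requires formwork."
query = """
PREFIX cm: <http://example.org/construction-knowledge#>
SELECT ?component ?condition
WHERE {
    ?component a cm:Wall ;
               cm:hasDesignCondition ?condition .
    ?condition cm:requires cm:Formwork .
}
"""
for component, condition in g.query(query):
    print(component, condition)
```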
Abstract:
We address the issue of rate-distortion (R/D) performance optimality of the recently proposed switched split vector quantization (SSVQ) method. The distribution of the source is modeled using a Gaussian mixture density; thus, the non-parametric SSVQ is analyzed in a parametric model-based framework for achieving optimum R/D performance. Using high-rate quantization theory, we derive the optimum bit allocation formulae for the intra-cluster split vector quantizer (SVQ) and the inter-cluster switching. For wide-band speech line spectrum frequency (LSF) parameter quantization, it is shown that the Gaussian mixture model (GMM) based parametric SSVQ method provides a 1 bit/vector advantage over the non-parametric SSVQ method.
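For context, the classical high-rate bit-allocation rule gives a flavor of such formulae; this is the standard textbook result for a K-component source, not necessarily the exact allocation derived in the paper. With an average budget of \(\bar{b}\) bits per component and component variances \(\sigma_i^2\), the MSE-optimal allocation is

\[
b_i = \bar{b} + \frac{1}{2}\log_2 \frac{\sigma_i^2}{\bigl(\prod_{j=1}^{K}\sigma_j^2\bigr)^{1/K}},
\]

so components whose variance exceeds the geometric mean of all variances receive more than the average number of bits.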
Abstract:
The legality of the operation of Google’s search engine, and its liability as an Internet intermediary, has been tested in various jurisdictions on various grounds. In Australia, there was an ultimately unsuccessful case against Google under the Australian Consumer Law relating to how it presents results from its search engine. Despite this failed claim, several complex issues were not adequately addressed in the case, including whether Google sufficiently distinguishes between the different parts of its search results page, so as not to mislead or deceive consumers. This article seeks to address this question of consumer confusion by drawing on empirical survey evidence of Australian consumers’ understanding of Google’s search results layout. This evidence, the first of its kind in Australia, indicates some level of consumer confusion. The implications for future legal proceedings against Google in Australia and in other jurisdictions are discussed.
Abstract:
The window technique is one of the simplest methods for designing Finite Impulse Response (FIR) filters. It uses special functions to truncate an infinite sequence to a finite one. In this paper, we propose window techniques based on integer sequences. The striking feature of the proposed work is that it overcomes the problems posed by floating-point representation and its inaccuracy, as the sequences consist only of integers. Some of these integer window sequences yield sharp transitions, while others result in zero ripple in the passband and stopband.
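A minimal sketch of the window method with an integer-valued window is shown below. The binomial (Pascal-row) window used here is one plausible all-integer sequence chosen for illustration, not necessarily one of the sequences the paper proposes.

```python
# Window-based FIR design: truncate the ideal (infinite) lowpass impulse
# response with an integer-valued window, here a row of Pascal's triangle.
import numpy as np
from math import comb

def integer_window(N):
    """Row N-1 of Pascal's triangle: an all-integer, bell-shaped window."""
    return np.array([comb(N - 1, k) for k in range(N)], dtype=float)

def fir_lowpass(N, cutoff):
    """N-tap lowpass FIR; cutoff is a fraction of the sampling rate (0..0.5)."""
    n = np.arange(N) - (N - 1) / 2                 # symmetric time index
    ideal = 2 * cutoff * np.sinc(2 * cutoff * n)   # ideal sinc response, truncated
    h = ideal * integer_window(N)                  # apply the integer window
    return h / h.sum()                             # normalize to unity gain at DC

h = fir_lowpass(31, cutoff=0.2)
```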
Abstract:
"Trust and Collectives" is a compilation of articles: (I) "On Rational Trust" (in Meggle, G. (ed.) Social Facts & Collective Intentionality, Dr. Hänsel-Hohenhausen AG (currently Ontos), 2002), (II) "Simulating Rational Social Normative Trust, Predictive Trust, and Predictive Reliance Between Agents" (M.Tuomela and S. Hofmann, Ethics and Information Technology 5, 2003), (III) "A Collective's Trust in a Collective's action" (Protosociology, 18-19, 2003), and (IV) "Cooperation and Trust in Group Contexts" (R. Tuomela and M.Tuomela, Mind and Society 4/1, 2005 ). The articles are tied together by an introduction that dwells deeply on the topic of trust. (I) presents a somewhat general version of (RSNTR) and some basic arguments. (II) offers an application of (RSNTR) for a computer simulation of trust.(III) applies (RSNTR) to Raimo Tuomela's "we-mode"collectives (i.e. The Philosophy of Social Practices, Cambridge University Press, 2002). (IV) analyzes cooperation and trust in the context of acting as a member of a collective. Thus, (IV) elaborates on the topic of collective agency in (III) and puts the trust account (RSNTR) to work in a framework of cooperation. The central aim of this work is to construct a well-argued conceptual and theoretical account of rational trust, viz. a person's subjectively rational trust in another person vis-à-vis his performance of an action, seen from a first-person point of view. The main method is conceptual and theoretical analysis understood along the lines of reflective equilibrium. The account of rational social normative trust (RSNTR), which is argued and defended against other views, is the result of the quest. The introduction stands on its own legs as an argued presentation of an analysis of the concept of rational trust and an analysis of trust itself (RSNTR). It is claimed that (RSNTR) is "genuine" trust and embedded in a relationship of mutual respect for the rights of the other party. This relationship is the growing site for trust, a causal and conceptual ground, but it is not taken as a reason for trusting (viz. predictive "trust"). Relevant themes such as risk, decision, rationality, control, and cooperation are discussed and the topics of the articles are briefly presented. In this work it is argued that genuine trust is to be kept apart from predictive "trust." When we trust a person vis-à-vis his future action that concerns ourselves on the basis of his personal traits and/or features of the specific situation we have a prediction-like attitude. Genuine trust develops in a relationship of mutual respect for the mutual rights of the other party. Such a relationship is formed through interaction where the parties gradually find harmony concerning "the rules of the game." The trust account stands as a contribution to philosophical research on central social notions and it could be used as a theoretical model in social psychology, economical and political science where interaction between persons and groups are in focus. The analysis could also serve as a model for a trust component in computer simulation of human action. In the context of everyday life the account clarifies the difference between predictive "trust" and genuine trust. There are no fast shortcuts to trust. Experiences of mutual respect for mutual rights cannot be had unless there is respect.
Abstract:
Nanotechnology is a new technology which is generating a lot of interest among academics, practitioners and scientists. Critical research is being carried out in this area all over the world. Governments are creating policy initiatives to promote developments in nanoscale science and technology. Private investment is also seeing a rising trend. A large number of academic institutions and national laboratories have set up research centers that are working on the multiple applications of nanotechnology. A wide range of applications is claimed for nanotechnology, from materials, chemicals, textiles and semiconductors to wonder drug delivery systems and diagnostics. Nanotechnology is considered to be the next big wave of technology after information technology and biotechnology. In fact, nanotechnology holds the promise of advances that exceed those achieved in recent decades in computers and biotechnology. Much of the interest in nanotechnology could also be due to the fact that enormous monetary benefits are expected from nanotechnology-based products. According to the NSF, revenues from nanotechnology could touch $1 trillion by 2015. However, much of these benefits are projected ones. Realizing the claimed benefits requires successful development of nanoscience and nanotechnology research efforts; that is, the journey from invention to innovation has to be completed. For this to happen, the technology has to flow from laboratory to market. Nanoscience and nanotechnology research efforts have to come out in the form of new products, new processes, and new platforms. India has also started its Nanoscience and Nanotechnology development program under its 10th Five Year Plan, and funds worth Rs. one billion have been allocated for Nanoscience and Nanotechnology Research and Development. The aim of the paper is to assess Nanoscience and Nanotechnology initiatives in India. We propose a conceptual model derived from the resource-based view of innovation. We have developed a structured questionnaire to measure the constructs in the conceptual model. Responses have been collected from 115 scientists and engineers working in the field of Nanoscience and Nanotechnology. The responses have been analyzed further by using Principal Component Analysis, Cluster Analysis and Regression Analysis.
Abstract:
The core aim of machine learning is to make a computer program learn from experience. Learning from data is usually defined as a task of learning regularities or patterns in data in order to extract useful information, or to learn the underlying concept. An important sub-field of machine learning is called multi-view learning, where the task is to learn from multiple data sets or views describing the same underlying concept. A typical example of such a scenario would be to study a biological concept using several biological measurements like gene expression, protein expression and metabolic profiles, or to classify web pages based on their content and the contents of their hyperlinks. In this thesis, novel problem formulations and methods for multi-view learning are presented. The contributions include a linear data fusion approach during exploratory data analysis, a new measure to evaluate different kinds of representations for textual data, and an extension of multi-view learning for novel scenarios where the correspondence of samples in the different views or data sets is not known in advance. In order to infer the one-to-one correspondence of samples between two views, a novel concept of multi-view matching is proposed. The matching algorithm is completely data-driven and is demonstrated in several applications such as matching of metabolites between humans and mice, and matching of sentences between documents in two languages.
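The sketch below gives a minimal, illustrative version of data-driven two-view matching: alternate between fitting a shared subspace (here CCA) under the current pairing and re-solving the one-to-one assignment in that subspace. It is a generic alternating scheme, not the thesis's exact algorithm.

```python
# Minimal two-view matching sketch: CCA + optimal assignment, alternated.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist
from sklearn.cross_decomposition import CCA

def match_views(X, Y, n_components=2, n_iter=10, seed=0):
    """Return a permutation p such that X[i] is matched with Y[p[i]].

    X and Y must have the same number of samples; n_components must not
    exceed the smaller feature dimension of the two views.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(Y))            # arbitrary initial pairing
    for _ in range(n_iter):
        cca = CCA(n_components=n_components)
        cca.fit(X, Y[perm])                   # shared subspace from current pairing
        Xc, Yc = cca.transform(X, Y)          # project both views into it
        _, perm = linear_sum_assignment(cdist(Xc, Yc))  # best one-to-one match
    return perm
```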
Abstract:
As the virtual world grows more complex, finding a standard way of storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so that the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate the album. Copies of the user's data could be stored on some of his family members' computers, on some of his own computers, and also at some online services which he uses. When all actors operate over one replicated copy of the data, the system automatically avoids a single point of failure; the data will not disappear with one computer breaking, or one service provider going out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, at the user's request. In our research we tried to find a model that would make data manageable to users and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet, and GNUnet, that suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing anonymous web access, and preventing censorship in file-sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and being monitored. All of the systems use cryptography to secure the names used for content, and to protect the data from outsiders. Based on the knowledge gained, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends instead of letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by exposing its contents through an integrated HTTP server. The REST-like HTTP API supports development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we are not expecting our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data.
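As a rough illustration of what talking to such a REST-like HTTP API might look like from a script, consider the sketch below. The host, port, paths, and payload fields are hypothetical placeholders invented for this example; Peerscape's actual API surface is not documented here.

```python
# Hypothetical client for a Peerscape-style REST-like HTTP API.
# Every endpoint and field name below is an assumption, not the real API.
import requests

BASE = "http://localhost:8080"  # assumed address of the integrated HTTP server

# Fetch a data item by its cryptographically verifiable identifier.
item = requests.get(f"{BASE}/item/some-content-id").json()

# Publish a new item into the synchronized, replicated database.
resp = requests.post(f"{BASE}/item", json={"type": "chat-message", "body": "hello"})
print(resp.status_code, item)
```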
Abstract:
During the last few decades there has been far-reaching financial market deregulation, technical development, advances in information technology, and standardization of legislation between countries. As a result, one can expect that financial markets have grown more interlinked. A proper understanding of cross-market linkages has implications for investment and risk management, diversification, asset pricing, and regulation. The purpose of this research is to assess the degree of price, return, and volatility linkages between both geographic markets and asset categories within one country, Finland. Another purpose is to analyze risk asymmetries, i.e., the tendency of equity risk to be higher after negative events than after positive events of equal magnitude. The analysis is conducted both with respect to total risk (volatility) and systematic risk (beta). The thesis consists of an introductory part and four essays. The first essay studies to what extent international stock prices comove. The degree of comovement is low, indicating benefits from international diversification. The second essay examines the degree to which the Finnish market is linked to the “world market”. The total risk is divided into two parts, one relating to world factors and one relating to domestic factors. The impact of world factors has increased over time. After 1993, when foreign investors were allowed to freely invest in Finnish assets, the risk level has been higher than previously. This was also the case during the economic recession in the beginning of the 1990s. The third essay focuses on the stock, bond, and money markets in Finland. According to a trading model, the degree of volatility linkages should be strong. However, the results contradict this: the linkages are surprisingly weak, even negative. The stock market is the most independent, while the money market is affected by events on the two other markets. The fourth essay concentrates on volatility and beta asymmetries. Contrary to many international studies, there are only a few cases of risk asymmetries. When they occur, they tend to be driven by the market-wide component rather than the portfolio-specific element.
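One standard way to capture such volatility asymmetry, offered here only as a hedged illustration and not necessarily the specification used in the essays, is a threshold (GJR) GARCH model in which negative return shocks raise next-period variance more than positive shocks of equal size:

\[
\sigma_t^2 = \omega + \alpha\,\epsilon_{t-1}^2 + \gamma\,\epsilon_{t-1}^2\,\mathbf{1}[\epsilon_{t-1}<0] + \beta\,\sigma_{t-1}^2,
\]

with asymmetry present whenever \(\gamma > 0\).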
Abstract:
This thesis explores the relationship between humans and ICTs (information and communication technologies). As ICTs are increasingly penetrating all spheres of social life, their role as mediators – between people, between people and information, and even between people and the natural world – is expanding, and they are increasingly shaping social life. Yet, we still know little of how our life is affected by their growing role. Our understanding of the actors and forces driving the accelerating adoption of new ICTs in all areas of life is also fairly limited. This thesis addresses these problems by interpretively exploring the link between ICTs and the shaping of society at home, in the office, and in the community. The thesis builds on empirical material gathered in three research projects, presented in four separate essays. The first project explores computerized office work through a case study. The second is a regional development project aiming at increasing ICT knowledge and use in 50 small-town families. In the third, the second project is compared to three other longitudinal development projects funded by the European Union. Using theories that consider the human-ICT relationship as intertwined, the thesis provides a multifaceted description of life with ICTs in contemporary information society. By oscillating between empirical and theoretical investigations and balancing between determinist and constructivist conceptualisations of the human-ICT relationship, I construct a dialectical theoretical framework that can be used for studying socio-technical contexts in society. This framework helps us see how societal change stems from the complex social processes that surround routine everyday actions. For example, interacting with and through ICTs may change individuals’ perceptions of time and space, social roles, and the proper ways to communicate – changes which at some point in time result in societal change in terms of, for example, new ways of acting and knowing things.
Abstract:
We present a distributed algorithm that finds a maximal edge packing in O(Δ + log* W) synchronous communication rounds in a weighted graph, independent of the number of nodes in the network; here Δ is the maximum degree of the graph and W is the maximum weight. As a direct application, we have a distributed 2-approximation algorithm for minimum-weight vertex cover, with the same running time. We also show how to find an f-approximation of minimum-weight set cover in O(f²k² + fk log* W) rounds; here k is the maximum size of a subset in the set cover instance, f is the maximum frequency of an element, and W is the maximum weight of a subset. The algorithms are deterministic, and they can be applied in anonymous networks.
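The weighted edge-packing result generalizes a classic fact: the endpoints of any maximal matching form a 2-approximate vertex cover. Below is a minimal centralized sketch of that unweighted special case; the paper's algorithm is distributed and handles weights.

```python
# Classic 2-approximation for (unweighted) vertex cover via maximal matching:
# greedily match edges and take both endpoints of every matched edge.
def two_approx_vertex_cover(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge still unmatched
            cover.add(u)                       # take both endpoints
            cover.add(v)
    return cover

print(two_approx_vertex_cover([(1, 2), (2, 3), (3, 4)]))  # e.g. {1, 2, 3, 4}
```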
Abstract:
This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data, and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek for optimal data flows that make the most out of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are not only interested in detecting an intruder but also locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating–dominating codes are more appropriate. This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating–dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs – these are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating–dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
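For reference, a max-min linear program (in one standard formulation consistent with the lifetime-maximisation setting, stated here as a sketch rather than the thesis's exact model) asks for a nonnegative vector that maximizes the worst-off linear objective subject to linear capacity constraints:

\[
\begin{aligned}
\text{maximize}\quad & \omega \\
\text{subject to}\quad & Cx \ge \omega\mathbf{1}, \\
& Ax \le \mathbf{1}, \\
& x \ge \mathbf{0},
\end{aligned}
\]

where A and C have nonnegative entries. In lifetime maximisation, x would encode data flows, the rows of A per-node energy budgets, and the rows of C the utility each node derives from the flows.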
Abstract:
The worldwide research in nanoelectronics is motivated by the fact that scaling of MOSFETs by the conventional top-down approach will not continue forever, due to fundamental limits imposed by physics, even if it is delayed for some more years. The research community in this domain has largely become multidisciplinary, trying to discover novel transistor structures built with novel materials so that the semiconductor industry can continue to follow its projected roadmap. However, setting up and running a nanoelectronics facility for research is hugely expensive. Therefore it is a common model to set up a central networked facility that can be shared by a large number of users across the research community. The Centres for Excellence in Nanoelectronics (CEN) at the Indian Institute of Science, Bangalore (IISc) and the Indian Institute of Technology, Bombay (IITB) are such central networked facilities, set up with funding of about USD 20 million from the Department of Information Technology (DIT), Ministry of Communications and Information Technology (MCIT), Government of India, in 2005. The Indian Nanoelectronics Users Program (INUP) is a missionary program not only to spread awareness and provide training in nanoelectronics but also to provide easy access to the latest facilities at CEN in IISc and at IITB for the wider nanoelectronics research community in India. This program, also funded by MCIT, aims to train researchers by conducting workshops, hands-on training programs, and providing access to CEN facilities. This is a unique program aiming to expedite nanoelectronics research in the country, as funding for projects proposed by researchers from around India has prior financial approval from the government and requires only technical approval by the IISc/IITB team. This paper discusses the objectives of INUP, gives brief descriptions of the CEN facilities and the training programs conducted by INUP, and lists various research activities currently under way in the program.
Abstract:
We propose a method to encode 3D magnetic resonance image data, and a decoder, in such a way that fast access to any 2D image is possible by decoding only the corresponding information from each subband image, thus providing minimum decoding time. This will be of immense use to the medical community, because most PET and MRI data are volumetric. Preprocessing is carried out at every level before the wavelet transformation, to enable easier identification of coefficients from each subband image. Inclusion of special characters in the bit stream facilitates access to the corresponding information in the encoded data. Results are obtained by performing Daub4 along the x (row) and y (column) directions and Haar along the z (slice) direction. Comparable results are achieved with respect to the existing technique; in addition, decoding time is reduced by a factor of 1.98. Arithmetic coding is used to encode the corresponding information independently.
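The transform step described above can be sketched with PyWavelets, applying a different wavelet per axis; this is illustrative only, and the paper's preprocessing and bit-stream markers are not reproduced here.

```python
# Single-level 3D DWT with a different wavelet per axis:
# Daub4 ('db4') along x (rows) and y (columns), Haar along z (slices).
import numpy as np
import pywt

volume = np.random.rand(64, 64, 32)  # stand-in for a PET/MRI volume

coeffs = pywt.dwtn(volume, wavelet=("db4", "db4", "haar"))

# dwtn returns a dict keyed by subband name ('aaa', 'aad', ..., 'ddd');
# a requested 2D slice can then be reconstructed by decoding only the
# matching part of each subband instead of the full volume.
for name, band in sorted(coeffs.items()):
    print(name, band.shape)
```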