958 results for Information Dissemination


Relevance:

30.00%

Publisher:

Abstract:

Internet of Things (IoT) can be defined as a “network of networks” composed of billions of uniquely identified physical Smart Objects (SO), organized in an Internet-like structure. Smart Objects can be items equipped with sensors, consumer devices (e.g., smartphones, tablets, or wearable devices), and enterprise assets that are connected both to the Internet and to each other. The birth of the IoT, with its communication paradigms, can be considered an enabling factor for the creation of the so-called Smart Cities. A Smart City uses Information and Communication Technologies (ICT) to enhance the quality, performance and interactivity of urban services, ranging from traffic management and pollution monitoring to government services and energy management. This thesis focuses on multi-hop data dissemination within IoT and Smart City scenarios. The proposed multi-hop techniques, mostly based on probabilistic forwarding, are used for different purposes, from improving the performance of unicast protocols for Wireless Sensor Networks (WSNs) to efficient data dissemination within Vehicular Ad-hoc NETworks (VANETs).
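
The abstract leaves the forwarding rules to the body of the thesis, so the following is only a minimal sketch of the general idea behind probabilistic (gossip-style) forwarding: a node rebroadcasts a newly received message with some probability p rather than always, as plain flooding would. The topology, the FORWARD_PROB value, and all names are illustrative assumptions, not details of the proposed protocols.

```python
import random

# Illustrative sketch of probabilistic (gossip) forwarding: each node that
# receives a message for the first time rebroadcasts it to its neighbours
# with probability FORWARD_PROB, trading delivery ratio for fewer transmissions.
FORWARD_PROB = 0.7  # hypothetical forwarding probability

def gossip_dissemination(adjacency, source, forward_prob=FORWARD_PROB):
    """Simulate one dissemination; return (reached nodes, transmission count)."""
    reached = {source}
    frontier = [source]          # nodes that will (re)broadcast this round
    transmissions = 0
    while frontier:
        next_frontier = []
        for node in frontier:
            transmissions += 1   # one broadcast reaches all neighbours
            for neighbour in adjacency[node]:
                if neighbour not in reached:
                    reached.add(neighbour)
                    # Unlike plain flooding, forward only with probability p.
                    if random.random() < forward_prob:
                        next_frontier.append(neighbour)
        frontier = next_frontier
    return reached, transmissions

# Toy network: 50 nodes, each linked to ~4 random neighbours (symmetric links).
random.seed(42)
n = 50
adjacency = {i: set() for i in range(n)}
for i in range(n):
    for j in random.sample(range(n), 4):
        if i != j:
            adjacency[i].add(j)
            adjacency[j].add(i)

reached, tx = gossip_dissemination(adjacency, source=0)
print(f"coverage: {len(reached)}/{n} nodes, {tx} transmissions")
```

Lowering forward_prob cuts redundant transmissions at the cost of coverage; tuning that trade-off per scenario is the basic lever that schemes of this family exploit.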

Relevance:

30.00%

Publisher:

Abstract:

The role of information in high-technology markets is critical (Dutta, Narasimhan and Rajiv 1999; Farrell and Saloner 1986; Weiss and Heide 1993). In these markets, the volatility and volume of information present managers and researchers with the considerable challenge of monitoring such information and examining how potential customers may respond to it. This article examines the effects of the type and volume of information on the market share of different technological standards in the Local Area Networks (LAN) industry. We identify three types of information: technology-related, availability-related and adoption-related. Our empirical application suggests that all three types of information have significant effects on the market share of a technological standard, but their direction and magnitude differ. More specifically, technology-related information is negatively related to market share, as it signals that the underlying technology is immature and still evolving. Both availability-related and adoption-related information have a positive effect on market share, but the effect of the former is larger than that of the latter. We conclude that high-tech firms should emphasize the dissemination of information, especially availability-related information, as part of their promotional strategy for a new technology. Otherwise, they risk missing an opportunity to achieve a higher share and establish their market presence.

Relevance:

30.00%

Publisher:

Abstract:

Initially this thesis examines the various mechanisms by which technology is acquired within anodizing plants. In so doing, the history of the evolution of anodizing technology is recorded, with particular reference to the growth of major markets and to the contribution of the marketing efforts of the aluminium industry. The business economics of various types of anodizing plants are analyzed. Consideration is also given to the impact of developments in anodizing technology on production economics and market growth. The economic costs associated with work rejected for process defects are considered. Recent changes in the industry have created conditions whereby information technology has a potentially important role to play in retaining existing knowledge. One such contribution is exemplified by the expert system which has been developed for the identification of anodizing process defects. Instead of using a "rule-based" expert system, a commercial neural network program has been adapted for the task. The advantage of neural networks over rule-based systems is that they are better suited to production problems, since the actual conditions prevailing when the defect was produced are often not known with certainty. In using the expert system, the user first identifies the process stage at which the defect probably occurred and is then directed to a file enabling the actual defect to be identified. After making this identification, the user can consult a database which gives a more detailed description of the defect, advises on remedial action and provides a bibliography of papers relating to the defect. The database uses a proprietary hypertext program, which also provides rapid cross-referencing to similar types of defect. Additionally, a graphics file can be accessed which (where appropriate) will display a graphic of the defect on screen. A total of 117 defects are included, together with 221 literature references, supplemented by 48 cross-reference hyperlinks. The main text of the thesis contains 179 literature references. (DX186565)
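
The thesis identifies the neural-network software only as an adapted commercial program, so the sketch below is a hedged stand-in for the general approach rather than the actual system: a small feed-forward classifier mapping uncertain process conditions to a likely defect class. The feature names, defect labels and synthetic training data are all invented for illustration, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical process readings: [bath temperature (C), current density (A/dm2),
# anodizing time (min), sulphuric acid concentration (g/l)].  The defect labels
# are equally illustrative -- the actual system covers 117 defects.
rng = np.random.default_rng(0)
X = rng.normal(loc=[20, 1.5, 30, 180], scale=[2, 0.3, 5, 15], size=(200, 4))
# Toy labelling rule: hot baths -> "soft coating", high current -> "burning".
y = np.where(X[:, 0] > 21, "soft coating",
             np.where(X[:, 1] > 1.7, "burning", "no defect"))

# A small feed-forward network, standing in for the commercial package.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Classify a new, uncertain set of process conditions.
reading = [[22.5, 1.4, 28, 175]]
print(clf.predict(reading))        # most likely defect class
print(clf.predict_proba(reading))  # graded likelihoods for each class
```

The graded class probabilities illustrate the advantage claimed above: unlike hard if-then rules, this kind of classifier degrades gracefully when the conditions prevailing at the time of the defect are not known with certainty.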

Relevance:

30.00%

Publisher:

Abstract:

Ensuring the security of corporate information, which is increasingly stored, processed and disseminated using information and communications technologies (ICTs), has become an extremely complex and challenging activity. This is a particularly important concern for knowledge-intensive organisations, such as universities, as the effective conduct of their core teaching and research activities is becoming ever more reliant on the availability, integrity and accuracy of computer-based information resources. One increasingly important mechanism for reducing the occurrence of security breaches, and in so doing protecting corporate information, is the formulation and application of a formal information security policy (InSPy). Whilst a great deal has now been written about the importance and role of the information security policy, and about approaches to its formulation and dissemination, there is relatively little empirical material that explicitly addresses the structure or content of security policies. The broad aim of the study reported in this paper is to fill this gap in the literature by critically examining the structure and content of authentic information security policies, rather than simply making general prescriptions about what they ought to contain. Having established the structure and key features of the reviewed policies, the paper critically explores the underlying conceptualisation of information security embedded in them. Two important conclusions can be drawn from this study: (1) the wide diversity of disparate policies and standards in use is unlikely to foster a coherent approach to security management; and (2) the range of specific issues explicitly covered in university policies is surprisingly narrow, reflecting a highly techno-centric view of information security management.

Relevance:

30.00%

Publisher:

Abstract:

The impact of ICT (information and communications technology) on the logistics service industry is reshaping its organisation and structure. Within this process, the nature of the changes resulting from ICT dissemination in small 3PLs (third-party logistics providers) is still unclear, even though many logistics service markets, especially in the EU context, are populated by large numbers of small 3PLs. In addition, there is still a gap in the literature, in which the role of technological capability in small 3PLs is seriously underestimated. This gives rise to the need for further investigation in this area. The paper presents the preliminary results of a case study analysis of ICT usage in a sample of seven small Italian 3PLs. The results highlight some of the barriers to effective ICT implementation, as well as some of the critical success factors.

Relevance:

30.00%

Publisher:

Abstract:

While openness is well established in software development and exploitation (open source) and successfully applied to new business models (open innovation), fundamental and applied research seems to lag behind. Even after decades of advocacy, in 2011 only 50% of publicly funded research was freely available and accessible (Archambault et al., 2013). Current research workflows, stemming from a pre-internet age, not only cause a loss of opportunity for the researchers themselves (cf. the extensive literature on the topic at the Open Access citation project, http://opcit.eprints.org/), but also slow down innovation and the application of research results (Houghton & Swan, 2011). Recent studies continue to suggest that lack of awareness among researchers, rather than lack of e-infrastructure and methodology, is a key reason for this loss of opportunity (Graziotin 2014). The session will focus on why Open Science is ideally suited to achieving tenure-relevant researcher impact in a “Publish or Perish” reality. Open Science encapsulates tools and approaches for each step along the research cycle, from Open Notebook Science to Open Data and Open Access, all setting researchers up to capitalise on social media to promote and discuss their work and to establish unexpected collaborations. Incorporating these new approaches into an updated personal research workflow is strategically beneficial for young researchers and will prepare them for expected long-term funder trends towards greater openness and demand for a greater return on investment (ROI) on public funds.

Relevance:

30.00%

Publisher:

Abstract:

Background: Biologists often need to assess whether unfamiliar datasets warrant the time investment required for more detailed exploration. Basing such assessments on brief descriptions provided by data publishers is unwieldy for large datasets that contain insights dependent on specific scientific questions. Alternatively, using complex software systems for a preliminary analysis may be deemed too time-consuming in itself, especially for unfamiliar data types and formats. This may lead to wasted analysis time and the discarding of potentially useful data. Results: We present an exploration of design opportunities that the Google Maps interface offers to biomedical data visualization. In particular, we focus on synergies between visualization techniques and Google Maps that facilitate the development of biological visualizations that have both low overhead and sufficient expressivity to support the exploration of data at multiple scales. The methods we explore rely on displaying pre-rendered visualizations of biological data in browsers, with sparse yet powerful interactions, by using the Google Maps API. We structure our discussion around five visualizations: a gene co-regulation visualization, a heatmap viewer, a genome browser, a protein interaction network, and a planar visualization of white matter in the brain. Feedback from collaborative work with domain experts suggests that our Google Maps visualizations offer multiple, scale-dependent perspectives and can be particularly helpful for unfamiliar datasets due to their accessibility. We also find that users, particularly those less experienced with computer use, are attracted by the familiarity of the Google Maps interface. Our five implementations introduce design elements that can benefit visualization developers. Conclusions: We describe a low-overhead approach that lets biologists access readily analyzed views of unfamiliar scientific datasets. We rely on pre-computed visualizations prepared by data experts, accompanied by sparse and intuitive interactions, and distributed via the familiar Google Maps framework. Our contributions are an evaluation demonstrating the validity and opportunities of this approach, a set of design guidelines benefiting those wanting to create such visualizations, and five concrete example visualizations.
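
The paper's own rendering pipeline is not given in the abstract; as an illustration of the core mechanism (serving a large pre-rendered visualization as the zoomable {z}/{x}/{y} tile pyramid that map APIs expect), here is a minimal Python sketch using Pillow. The file names, zoom range and square-image assumption are all illustrative, not taken from the paper.

```python
from pathlib import Path
from PIL import Image

TILE = 256  # standard Google Maps tile size in pixels

def build_tile_pyramid(image_path, out_dir, max_zoom=3):
    """Slice one pre-rendered visualization into a {zoom}/{x}/{y}.png pyramid.

    At zoom z the map is a 2**z x 2**z grid of 256px tiles, so the source
    image is resized to fill that grid and cut into fixed-size squares.
    """
    Image.MAX_IMAGE_PIXELS = None          # allow large pre-rendered figures
    src = Image.open(image_path).convert("RGB")
    for z in range(max_zoom + 1):
        side = TILE * (2 ** z)             # full map width at this zoom
        level = src.resize((side, side), Image.LANCZOS)
        for x in range(2 ** z):
            for y in range(2 ** z):
                tile = level.crop((x * TILE, y * TILE,
                                   (x + 1) * TILE, (y + 1) * TILE))
                path = Path(out_dir) / str(z) / str(x)
                path.mkdir(parents=True, exist_ok=True)
                tile.save(path / f"{y}.png")

# Hypothetical usage: a heatmap rendered offline at high resolution.
build_tile_pyramid("heatmap_full.png", "tiles", max_zoom=3)
```

A browser page can then point a google.maps.ImageMapType at the resulting tiles/{z}/{x}/{y}.png URLs, which supplies the familiar pan-and-zoom interaction with essentially no custom interface code.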

Relevance:

30.00%

Publisher:

Abstract:

The next generation of vehicles will be equipped with automated Accident Warning Systems (AWSs) capable of warning neighbouring vehicles about hazards that might lead to accidents. The key enabling technology for these systems is the Vehicular Ad-hoc Network (VANET), but the dynamics of such networks make the crucial timely delivery of warning messages challenging. While most previously attempted implementations have used broadcast-based data dissemination schemes, these do not cope well as data traffic load or network density increases. This thesis addresses the problem of sending warning messages in a timely manner by employing a network coding technique. The proposed NETwork COded DissEmination (NETCODE) is a VANET-based AWS responsible for generating and sending warnings to the vehicles on the road. NETCODE offers an XOR-based data dissemination scheme that sends multiple warnings in a single transmission and therefore reduces the total number of transmissions required to send the same number of warnings that broadcast schemes send. Hence, it reduces contention and collisions in the network, improving the delivery time of the warnings. The first part of this research (Chapters 3 and 4) asserts that in order to build a warning system, it is necessary to ascertain the system requirements, the information to be exchanged, and the protocols best suited for communication between vehicles. Therefore, a study of these factors is carried out, along with a review of existing proposals identifying their strengths and weaknesses. An analysis of existing broadcast-based warning schemes is then conducted, which concludes that although broadcast is the most straightforward scheme, heavy loading can cause an effective collapse, resulting in unacceptably long transmission delays. The second part of this research (Chapter 5) proposes the NETCODE design, including the main contribution of this thesis: a pair of encoding and decoding algorithms that make use of an XOR-based technique to reduce transmission overheads and thus allow warnings to be delivered in time. The final part of this research (Chapters 6--8) evaluates the performance of the proposed scheme in terms of how it reduces the number of transmissions in the network in response to growing data traffic load and network density, and investigates its capacity to detect potential accidents. The evaluations use a custom-built simulator to model real-world scenarios such as city areas, junctions, roundabouts and motorways. The study shows that the reduction in the number of transmissions significantly reduces contention in the network, allowing vehicles to deliver warning messages more rapidly to their neighbours. It also examines the relative performance of NETCODE when handling both sudden event-driven and longer-term periodic messages in diverse scenarios under stress caused by increasing numbers of vehicles and transmissions per vehicle. This work confirms the thesis' primary contention that XOR-based network coding provides a potential solution on which a more efficient AWS data dissemination scheme can be built.
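
NETCODE's encoding and decoding algorithms are the thesis' contribution and are not reproduced in the abstract; the sketch below only demonstrates the XOR primitive they build on: two warnings combined into one coded packet, each recoverable by any receiver that already holds the other. The payloads and lengths are invented for illustration.

```python
# Illustrative XOR network-coding primitive (not the NETCODE algorithms
# themselves): two fixed-length warnings are combined into one coded packet.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two hypothetical warning payloads, padded to a common length.
w1 = b"HAZARD:ICE@J4".ljust(16)
w2 = b"CRASH:M6-NB:L2".ljust(16)

coded = xor_bytes(w1, w2)   # one transmission carries both warnings

# A vehicle that previously overheard w1 can decode w2 from the coded
# packet, and vice versa -- this is where the transmission saving comes from.
assert xor_bytes(coded, w1) == w2
assert xor_bytes(coded, w2) == w1
print("decoded:", xor_bytes(coded, w1))
```

The saving depends on receivers already holding complementary packets, which is why XOR coding pays off precisely as network density and traffic load grow, the regime in which plain broadcast collapses.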

Relevance:

30.00%

Publisher:

Abstract:

This text stems from the project "UNESP in Partnership with the Public Administration: the City of Echaporã", a multidisciplinary, interdepartmental project resulting from a partnership agreement signed between the State University (Marilia Campus), the Regional Planning Articulation Office and the Municipality of Echaporã. Given the serious social problems diagnosed in this municipality of the Marilia Administrative Region, forces were joined and, since April 2002, Echaporã has benefited from a matrix project that involves the community in six (6) subprojects, among them one that emphasizes the dissemination of information (the public library as an information centre and radiator of knowledge for urban and rural areas, seeking to bring the municipality into the Information Society). By its nature, the matrix project is considered open and can accommodate new subprojects, provided they address the problems identified in the initial diagnosis. Each subproject has its own methodology, some of them innovative and subject to further systematization and dissemination; after a few months of deployment, however, the results confirm the value of involving a representative cross-section of the community in all discussions and stages of the research, and of widely disseminating the activities developed to the target audience. The engagement of the community, its leaders and the authorities can be considered a good barometer of the actions carried out in Echaporã, and evidence of a change in the town's information culture that is already noticeable, shaping its socio-cultural dynamics in terms of a new public policy to be strengthened by the participation of specialists in this specific area working directly with local managers; in this case, the project offers concrete examples of the power of information in processes of change in small municipalities.

Relevance:

30.00%

Publisher:

Abstract:

This chapter discusses the consequences of open-access (OA) publishing and dissemination for libraries in higher education institutions (HEIs). Key questions addressed in the chapter include: 1. How might OA help information provision? 2. What changes to library services will arise from OA developments (particularly if OA becomes widespread)? 3. How do these changes fit in with wider changes affecting the future role of libraries? 4. How can libraries and librarians help to address key practical issues associated with the implementation of OA (particularly transition issues)? The chapter looks at OA from the perspective of HE libraries and makes four key points: 1. Open access has the potential to bring benefits to the research community in particular, and society in general, by improving information provision. 2. If there is widespread open access to research content, there will be less need for library-based activity at the institution level and more need for information management activity at the supra-institutional or national level. 3. Institutional libraries will, however, continue to have an important role to play in areas such as managing purchased or licensed content, curating institutional digital assets, and providing support in the use of content for teaching and research. 4. Libraries are well placed to work with stakeholders within their institutions and beyond to help resolve current challenges associated with the implementation of OA policies and practices.

Relevance:

30.00%

Publisher:

Abstract:

This research develops an econometric framework for analyzing time series processes with bounds. The framework is general enough to incorporate several different kinds of bounding information that constrain continuous-time stochastic processes between discretely sampled observations. It applies to situations in which the process is known to remain within an interval between observations, either by way of a known constraint or through the observation of extreme realizations of the process. The main statistical technique employs the theory of maximum likelihood estimation, leading to the development of the asymptotic distribution theory for the estimation of the parameters in bounded diffusion models. The results of this analysis have several implications for empirical research. The advantages are realized in the form of efficiency gains, bias reduction and flexibility of model specification. A bias arises when bounding information is present but ignored; it is mitigated within this framework. An efficiency gain arises in the sense that the statistical methods make use of the conditioning information revealed by the bounds. Further, the specification of an econometric model can be uncoupled from the restriction to the bounds, leaving the researcher free to model the process near the bound in a way that avoids bias from misspecification. One byproduct of the improvements in model specification is that the more precise model estimation exposes other sources of misspecification: some processes reveal themselves to be unlikely candidates for a given diffusion model once the observations are analyzed in combination with the bounding information. A closer inspection of the theoretical foundation behind diffusion models leads to a more general specification of the model, which is used to produce a set of algorithms that make the model computationally feasible and more widely applicable. Finally, the modeling framework is applied to a series of interest rates which, for several years, have been constrained by the lower bound of zero. The estimates from a series of diffusion models suggest a substantial difference in estimation results between models that ignore bounds and the framework that takes bounding information into consideration.
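
The abstract does not reproduce the framework's estimators, so the following is a toy illustration, under strong simplifying assumptions, of how bounding information enters a likelihood: a driftless diffusion reflected at a lower bound of zero, whose one-step transition density follows from the classical method of images, is fitted both with and without the bound. The simulated data, parameter values and NumPy/SciPy usage are assumptions of the sketch, not details of the thesis.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Simulate a driftless diffusion reflected at the lower bound zero.  One exact
# transition step: X_{t+dt} = |X_t + sigma * sqrt(dt) * Z|, Z ~ N(0, 1).
rng = np.random.default_rng(1)
sigma_true, dt, n = 0.5, 1.0, 500
x = np.empty(n + 1)
x[0] = 0.2
for t in range(n):
    x[t + 1] = abs(x[t] + sigma_true * np.sqrt(dt) * rng.standard_normal())

def nll_reflected(sigma):
    """Negative log-likelihood using the method-of-images transition density
    p(y | x) = [phi((y - x)/s) + phi((y + x)/s)] / s, with s = sigma*sqrt(dt)."""
    s = sigma * np.sqrt(dt)
    y, x0 = x[1:], x[:-1]
    dens = (norm.pdf((y - x0) / s) + norm.pdf((y + x0) / s)) / s
    return -np.sum(np.log(dens))

def nll_naive(sigma):
    """Gaussian likelihood that ignores the bound entirely."""
    s = sigma * np.sqrt(dt)
    return -np.sum(norm.logpdf(x[1:], loc=x[:-1], scale=s))

for name, nll in [("with bound", nll_reflected), ("ignoring bound", nll_naive)]:
    est = minimize_scalar(nll, bounds=(0.01, 5.0), method="bounded").x
    print(f"{name:>14}: sigma_hat = {est:.3f} (true {sigma_true})")
```

In this toy setting the two estimates differ because the naive Gaussian density misreads moves that were folded back by the bound; the thesis develops the corresponding estimation and asymptotic theory in far greater generality.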

Relevance:

20.00%

Publisher:

Abstract:

Developments in information technology will drive change in records management; however, it should be health information managers who drive change in information management. Health information managers will be challenged to use information technology to broker a range of requests for information from a variety of users, including health consumers. The purposes of this paper are to conceptualise the role of health information management in the context of a technologically driven and managed health care environment, and to demonstrate how this framework has been used to review and develop the undergraduate program in health information management at the Queensland University of Technology.