Abstract:
A large and complex IT project may involve multiple organizations and be constrained within a temporal period. An organization is a system comprising people, activities, processes, information, resources and goals. Understanding and modelling such a project and its interrelationships with the relevant organizations are essential for organizational project planning. This paper introduces the problem articulation method (PAM) as a semiotic method for organizational infrastructure modelling. PAM offers a suite of techniques that enables the articulation of business, technical and organizational requirements, delivering an infrastructural framework to support the organization. It works by eliciting and formalizing organizational abstractions (e.g. processes, activities, relationships, responsibilities, communications, resources, agents, dependencies and constraints) and mapping these abstractions to represent the manifestation of the "actual" organization. Many analysts forgo organizational modelling methods in favour of localized, ad hoc point solutions, but this approach is not amenable to organizational infrastructure modelling. A case study of the Infrared Atmospheric Sounding Interferometer (IASI) is used to demonstrate the applicability of PAM and to examine its relevance and significance in dealing with innovation and change in organizations.
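To make this kind of formalization concrete, here is a minimal, hypothetical sketch of how the elicited abstractions (agents, activities, resources, dependencies, constraints) might be represented; the names are purely illustrative and this is not PAM's actual notation:

from dataclasses import dataclass, field

# Hypothetical sketch only: a generic record structure for the kinds
# of abstractions PAM elicits. Illustrative names, not PAM notation.
@dataclass
class Agent:
    name: str
    responsibilities: list = field(default_factory=list)

@dataclass
class Activity:
    name: str
    performed_by: Agent
    resources: list = field(default_factory=list)
    depends_on: list = field(default_factory=list)   # other Activity objects
    constraints: list = field(default_factory=list)  # e.g. deadlines

ops = Agent("instrument_ops_team", responsibilities=["calibration"])
calibrate = Activity("calibrate_instrument", performed_by=ops,
                     resources=["reference blackbody"],
                     constraints=["complete before commissioning"])
process = Activity("process_level1_data", performed_by=ops,
                   depends_on=[calibrate])

Mapping such linked records onto the "actual" organization then amounts to checking that every activity has a responsible agent, every dependency is satisfiable and every constraint is owned.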
Abstract:
As integrated software solutions reshape project delivery, they alter the bases for collaboration and competition across firms in complex industries. This paper synthesises and extends literatures on strategy in project-based industries and digitally-integrated work to understand how project-based firms interact with digital infrastructures for project delivery. Four identified strategies are to: 1) develop and use capabilities to shape the integrated software solutions that are used in projects; 2) co-specialize, developing complementary assets to work repeatedly with a particular integrator firm; 3) retain flexibility by developing and maintaining capabilities in multiple digital technologies and processes; and 4) manage interfaces, translating work into project formats for coordination while hiding proprietary data and capabilities in internal systems. The paper articulates the strategic importance of digital infrastructures for delivery as well as product architectures. It concludes by discussing managerial implications of the identified strategies and areas for further research.
Abstract:
Kalman-Bucy filtering applies to state-space models consisting of noisy linear equations describing the evolution of the state, together with noisy linear observation equations. In the Gaussian case, this filtering amounts to computing recursively the posterior probability law of the state given the current and past observations. Filtering by approximate densities makes it possible to handle state equations that are nonlinear or driven by non-Gaussian noise. For a random restoring coefficient, a typical case of model switching, the article introduces a family of parameterized bimodal probability laws which, by adjustment of their parameters, serve to approximate the posterior laws of the state at successive instants. The parameters are recomputed recursively during the update and prediction steps.
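For orientation, the standard linear-Gaussian recursion that this work generalizes can be written in scalar form as follows (textbook notation with state x_k and observation y_k; this is the baseline, not the paper's bimodal approximate-density scheme):

\begin{align*}
  x_k &= a\,x_{k-1} + w_k, & w_k &\sim \mathcal{N}(0, Q)\\
  y_k &= h\,x_k + v_k,     & v_k &\sim \mathcal{N}(0, R)\\
  \hat{x}_{k|k-1} &= a\,\hat{x}_{k-1|k-1}, &
  P_{k|k-1} &= a^2\,P_{k-1|k-1} + Q\\
  K_k &= \frac{P_{k|k-1}\,h}{h^2\,P_{k|k-1} + R}\\
  \hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_k\,\bigl(y_k - h\,\hat{x}_{k|k-1}\bigr), &
  P_{k|k} &= (1 - K_k\,h)\,P_{k|k-1}
\end{align*}

When the restoring coefficient a is itself random, the exact posterior is no longer Gaussian but a mixture, which is what motivates replacing the Gaussian family above with a parameterized bimodal family updated in the same predict/update rhythm.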
Abstract:
Over the last few years, load growth, increases in intermittent generation, declining technology costs and increasing recognition of the importance of customer behaviour in energy markets have brought about a change in the focus of Demand Response (DR) in Europe. The long-standing programmes involving large industries, through interruptible tariffs and time-of-day pricing, have increasingly been complemented by programmes aimed at commercial and residential customer groups. Developments in DR vary substantially across Europe, reflecting national conditions and triggered by different sets of policies, programmes and implementation schemes. This paper examines experiences within European countries as well as at European Union (EU) level, with the aim of understanding which factors have facilitated or impeded advances in DR. It describes initiatives, studies and policies of various European countries, with in-depth case studies of the UK, Italy and Spain. It is concluded that while business programmes and technical and economic potentials vary across Europe, there are common reasons why coordinated DR policies have been slow to emerge: limited knowledge of DR energy saving capacities, high cost estimates for DR technologies and infrastructures, and policies focused on creating the conditions for liberalising the EU energy markets.
Abstract:
Recently major processor manufacturers have announced a dramatic shift in their paradigm to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single core CPUs, the trend clearly goes towards multi core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high performance computing systems, research on the corresponding algorithms must be on the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism: The first contribution focuses the classic problem of distributed association rule mining and focuses on communication efficiency to improve the state of the art. After this a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared memory systems is presented. The next paper discusses the design of a parallel approach for dis- tributed memory systems of the frequent subgraphs mining problem. This approach is based on a hierarchical communication topology to solve issues related to multi-domain computational envi- ronments. The forth paper describes the combined use and the customization of software packages to facilitate a top down parallelism in the tuning of Support Vector Machines (SVM) and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution finally focuses on very efficient feature selection. It describes a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that has provided detailed reviews for each paper. We would like to also thank Matthew Otey who helped with publicity for the workshop.
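As a flavour of the lowest-friction end of this spectrum, here is a minimal, hypothetical Python sketch of data-parallel feature selection over random subsets, in the spirit of the last contribution; the scoring function is a placeholder, not any paper's actual algorithm:

import random
from multiprocessing import Pool

def score_subset(args):
    data, labels, subset = args
    # Placeholder relevance score: a real system would train and
    # evaluate a model restricted to `subset` here.
    return subset, sum(abs(row[j]) for row in data for j in subset)

def select_features(data, labels, n_features,
                    n_subsets=32, subset_size=5, workers=4):
    subsets = [random.sample(range(n_features), subset_size)
               for _ in range(n_subsets)]
    with Pool(workers) as pool:  # embarrassingly parallel map
        scored = pool.map(score_subset,
                          [(data, labels, s) for s in subsets])
    best_subset, _ = max(scored, key=lambda pair: pair[1])
    return best_subset

On platforms that spawn rather than fork worker processes (e.g. Windows), the call to select_features would need to sit under an if __name__ == "__main__": guard.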
Abstract:
Data quality is a difficult notion to define precisely, and different communities have different views and understandings of the subject. This causes confusion, a lack of harmonization of data across communities and omission of vital quality information. For some existing data infrastructures, data quality standards cannot address the problem adequately and cannot fulfil all user needs or cover all concepts of data quality. In this study, we discuss some philosophical issues on data quality. We identify actual user needs on data quality, review existing standards and specifications on data quality, and propose an integrated model for data quality in the field of Earth observation (EO). We also propose a practical mechanism for applying the integrated quality information model to a large number of datasets through metadata inheritance. While our data quality management approach is in the domain of EO, we believe that the ideas and methodologies for data quality management can be applied to wider domains and disciplines to facilitate quality-enabled scientific research.
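A hypothetical sketch of what such metadata inheritance could look like in practice (field names and structure are illustrative only, not drawn from the paper or from any EO metadata standard):

from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch: a dataset inherits collection-level quality
# information unless it overrides it locally. Illustrative names only.
@dataclass
class MetadataNode:
    name: str
    parent: Optional["MetadataNode"] = None
    own_quality: dict = field(default_factory=dict)

    def effective_quality(self) -> dict:
        # Inherited records come first; local overrides win.
        inherited = self.parent.effective_quality() if self.parent else {}
        return {**inherited, **self.own_quality}

collection = MetadataNode("sst_collection",
                          own_quality={"sensor_calibration": "v2.1"})
granule = MetadataNode("sst_granule_2014_06_01", parent=collection,
                       own_quality={"cloud_mask_confidence": "0.93"})
print(granule.effective_quality())
# {'sensor_calibration': 'v2.1', 'cloud_mask_confidence': '0.93'}

The design point is that collection-level quality statements are recorded once and flow down to every member dataset, while any dataset can override or extend them locally.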
Abstract:
How can organizations use digital infrastructure to realize physical outcomes? The design and construction of London Heathrow Terminal 5 is analysed to build new theoretical understanding of visualization and materialization practices in the transition from digital design to physical realization. In the project studied, an integrated software solution is introduced as an infrastructure for delivery. The analyses articulate the work done to maintain this digital infrastructure and also to move designs beyond the closed world of the computer to a physical reality. In changing medium, engineers use heterogeneous trials to interrogate and address the limitations of an integrated digital model. The paper explains why such trials, which involve the reconciliation of digital and physical data through parallel and iterative forms of work, provide a robust practice for realizing goals that have physical outcomes. It argues that this practice is temporally different from, and at times in conflict with, building a comprehensive dataset within the digital medium. The paper concludes by discussing the implications for organizations that use digital infrastructures in seeking to accomplish goals in digital and physical media.
Abstract:
We describe ncWMS, an implementation of the Open Geospatial Consortium’s Web Map Service (WMS) specification for multidimensional gridded environmental data. ncWMS can read data in a large number of common scientific data formats – notably the NetCDF format with the Climate and Forecast conventions – then efficiently generate map imagery in thousands of different coordinate reference systems. It is designed to require minimal configuration from the system administrator and, when used in conjunction with a suitable client tool, provides end users with an interactive means for visualizing data without the need to download large files or interpret complex metadata. It is also used as a “bridging” tool providing interoperability between the environmental science community and users of geographic information systems. ncWMS implements a number of extensions to the WMS standard in order to fulfil some common scientific requirements, including the ability to generate plots representing timeseries and vertical sections. We discuss these extensions and their impact upon present and future interoperability. We discuss the conceptual mapping between the WMS data model and the data models used by gridded data formats, highlighting areas in which the mapping is incomplete or ambiguous. We discuss the architecture of the system and particular technical innovations of note, including the algorithms used for fast data reading and image generation. ncWMS has been widely adopted within the environmental data community and we discuss some of the ways in which the software is integrated within data infrastructures and portals.
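To illustrate the kind of request ncWMS answers, here is a sketch that assembles a standard WMS 1.3.0 GetMap URL, using the optional TIME and ELEVATION dimension parameters that make multidimensional gridded data addressable; the server URL and layer name are hypothetical:

from urllib.parse import urlencode

# Standard WMS 1.3.0 GetMap parameters; endpoint and layer name
# are hypothetical examples, not a real ncWMS deployment.
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "ocean/sea_water_temperature",  # hypothetical layer
    "STYLES": "",                             # server default style
    "CRS": "EPSG:4326",
    "BBOX": "-90,-180,90,180",                # lat/lon order in 1.3.0
    "WIDTH": "1024",
    "HEIGHT": "512",
    "FORMAT": "image/png",
    "TIME": "2014-06-01T00:00:00Z",           # optional dimension
    "ELEVATION": "-5.0",                      # optional dimension
}
url = "https://example.org/ncWMS/wms?" + urlencode(params)
print(url)

Note that in WMS 1.3.0 the EPSG:4326 axis order is latitude,longitude, which is why the BBOX above reads -90,-180,90,180; this is exactly the kind of mapping subtlety between the WMS data model and gridded data formats that the paper discusses.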
Abstract:
The likelihood that continuing greenhouse-gas emissions will lead to an unmanageable degree of climate change [1] has stimulated the search for planetary-scale technological solutions for reducing global warming [2] ("geoengineering"), typically characterized by the necessity for costly new infrastructures and industries [3]. We suggest that the existing global infrastructure associated with arable agriculture can help, given that crop plants exert an important influence over the climatic energy budget [4, 5] because of differences in their albedo (solar reflectivity) compared to soils and to natural vegetation [6]. Specifically, we propose a "bio-geoengineering" approach to mitigate surface warming, in which crop varieties having specific leaf glossiness and/or canopy morphological traits are specifically chosen to maximize solar reflectivity. We quantify this by modifying the canopy albedo of vegetation in prescribed cropland areas in a global-climate model, and thereby estimate the near-term potential of bio-geoengineering to be a summertime cooling of more than 1°C throughout much of central North America and midlatitude Eurasia, equivalent to seasonally offsetting approximately one-fifth of regional warming due to a doubling of atmospheric CO2 [7]. Ultimately, genetic modification of plant leaf waxes or canopy structure could achieve greater temperature reductions, although better characterization of existing intraspecies variability is needed first.
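As a rough, first-order sense of the mechanism (a schematic estimate under assumed illustrative values, not the paper's climate-model calculation), the change in locally absorbed shortwave flux from raising canopy albedo is approximately:

\[
  \Delta F \;\approx\; -\,S_{\downarrow}\,\Delta\alpha\,f_{\mathrm{crop}}
\]

where $S_{\downarrow}$ is the downwelling shortwave flux at the surface, $\Delta\alpha$ the imposed increase in canopy albedo, and $f_{\mathrm{crop}}$ the cropland fraction of the region. For example, assumed values $S_{\downarrow}=250\ \mathrm{W\,m^{-2}}$, $\Delta\alpha=0.04$ and $f_{\mathrm{crop}}=0.5$ would give $\Delta F\approx -5\ \mathrm{W\,m^{-2}}$ of local cooling tendency.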
Abstract:
There is increasing interest in integrating Java-based systems, and in particular Jini systems, with emerging Grid infrastructures. In this paper we explore various ways of integrating the key components of each architecture: their directory and information management services. In the first part of the paper we sketch out the Jini and Grid architectures and their services. We then review the components and services that Jini provides and compare these with those of the Grid. In the second part of the paper we critically explore four ways in which Jini and the Grid could interact; in particular, we look at possible scenarios that can provide a seamless interface to a Jini environment for Grid clients, and at how to use Jini services from a Grid environment. In the final part of the paper we summarise our findings and report on future work being undertaken to integrate Jini and the Grid.
Abstract:
Biodiversity informatics plays a central enabling role in the research community's efforts to address scientific conservation and sustainability issues. Great strides have been made in the past decade establishing a framework for sharing data, in which taxonomy and systematics has been perceived as the most prominent discipline involved. To some extent this is inevitable, given the use of species names as the pivot around which information is organised. To address the urgent questions around conservation, land-use, environmental change, sustainability, food security and ecosystem services that are facing governments worldwide, we need to understand how the ecosystem works. We therefore need a systems approach to understanding biodiversity that moves significantly beyond taxonomy and species observations. Such an approach needs to look at the whole system to address species interactions, both with their environment and with other species.

It is clear that some barriers to progress are sociological: essentially, persuading people to use the technological solutions that are already available. This is best addressed by developing more effective systems that deliver immediate benefit to the user, hiding the majority of the technology behind simple user interfaces. An infrastructure should be a space in which activities take place and, as such, should be effectively invisible.

This community consultation paper positions the role of biodiversity informatics for the next decade, presenting the actions needed to link the various biodiversity infrastructures invisibly and to facilitate understanding that can support both business and policy-makers. The community considers the goal in biodiversity informatics to be full integration of the biodiversity research community, including citizen science, through a commonly shared, sustainable e-infrastructure across all sub-disciplines that reliably serves science and society alike.
Abstract:
SOA (Service-Oriented Architecture), workflow, the Semantic Web, and Grid computing are key enabling information technologies in the development of increasingly sophisticated e-Science infrastructures and application platforms. While the emergence of Cloud computing as a new computing paradigm has provided new directions and opportunities for e-Science infrastructure development, it also presents some challenges. Scientific research is increasingly finding it difficult to handle "big data" using traditional data processing techniques. Such challenges demonstrate the need for a comprehensive analysis of how the above-mentioned informatics techniques can be used to develop appropriate e-Science infrastructures and platforms in the context of Cloud computing. This survey paper describes recent research advances in applying informatics techniques to facilitate scientific research, particularly from the Cloud computing perspective. Our particular contributions include identifying associated research challenges and opportunities, presenting lessons learned, and describing our future vision for applying Cloud computing to e-Science. We believe our research findings can help indicate the future trend of e-Science and can inform funding and research directions on how to more appropriately employ computing technologies in scientific research. We point out open research issues in the hope of sparking new development and innovation in the e-Science field.
Abstract:
Purpose
This paper aims to fill the research and knowledge gap in knowledge management studies in Ghana. Knowledge acquisition is one of the unexploited areas in the knowledge management literature, especially in the Ghanaian context. This study tries to ascertain the factors affecting knowledge acquisition in Ghanaian universities.
Design/methodology/approach
The study used the quantitative approach, with a cross-sectional survey adopted as the research design. A questionnaire consisting of Likert-scale questions was used to collect data from the respondents; the items and constructs were derived from the extant literature. The questionnaire was sent to 350 respondents, of whom 250 returned it fully completed. Data were quantitatively analysed using descriptive methods and factor analysis.
Findings
This study provides empirical evidence about the factors affecting knowledge acquisition in Ghanaian universities. Findings from the study show that programme content, lecturers' competence, students' academic background and attitude, and facilities for teaching and learning influence knowledge acquisition in Ghanaian universities.
Research limitations/implications
Although the study seeks to generalize the findings, this should be done cautiously, as some scholars have advocated larger sample sizes; nonetheless, some studies have used sample sizes smaller than the one used here.
Practical implications
The study highlights the need for Ghanaian universities to use modern facilities and infrastructures, such as electronic libraries and information technology equipment, and also to provide reading rooms to enhance teaching and learning.
Originality/value
Studies looking at knowledge acquisition in Ghanaian universities are virtually non-existent, and this study provides empirical findings on the factors affecting it.
Abstract:
A network is a natural structure with which to describe many aspects of a plant pathosystem. The article seeks to set out, in a nonmathematical way, some of the network concepts that promise to be useful in managing plant disease. The field has been stimulated by developments designed to help understand and manage animal and human disease, as well as by technical infrastructures such as the internet, and it overlaps partly with landscape ecology. The study of networks has helped identify likely ways to reduce the flow of disease in traded plants, to find the best sites to monitor as warning sites for annually reinvading disease, and to understand the fundamentals of how a pathogen spreads in different structures. A tension is highlighted between the free flow of goods or species down communication channels and the free flow of pathogens down the same pathways.