866 results for Infrastructures sanitaires


Relevance:

10.00%

Publisher:

Abstract:

Data quality is a difficult notion to define precisely, and different communities have different views and understandings of the subject. This causes confusion, a lack of harmonization of data across communities and omission of vital quality information. For some existing data infrastructures, data quality standards cannot address the problem adequately, cannot fulfil all user needs or cover all concepts of data quality. In this paper we discuss some philosophical issues on data quality. We identify actual user needs on data quality, review existing standards and specifications on data quality, and propose an integrated model for data quality in the field of Earth observation. We also propose a practical mechanism for applying the integrated quality information model to a large number of datasets through metadata inheritance. While our data quality management approach is in the domain of Earth observation, we believe the ideas and methodologies for data quality management can be applied to wider domains and disciplines to facilitate quality-enabled scientific research.
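The metadata-inheritance mechanism mentioned in the abstract lends itself to a simple illustration: quality information recorded once at the collection level is inherited by member datasets unless a dataset carries its own, more specific record. The sketch below uses hypothetical class and field names (QualityInfo, Collection, Dataset); it is not the paper's implementation.

# Illustrative sketch only: quality metadata attached to a collection once and
# inherited by member datasets unless a dataset overrides it. All names are
# hypothetical, not the paper's model.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class QualityInfo:
    accuracy: Optional[float] = None      # e.g. RMSE of the product
    completeness: Optional[float] = None  # fraction of valid observations
    lineage: Optional[str] = None         # free-text provenance note

@dataclass
class Collection:
    name: str
    quality: QualityInfo = field(default_factory=QualityInfo)

@dataclass
class Dataset:
    name: str
    parent: Collection
    quality_override: Optional[QualityInfo] = None

    def effective_quality(self) -> QualityInfo:
        """Dataset-level quality wins; otherwise inherit from the collection."""
        return self.quality_override or self.parent.quality

if __name__ == "__main__":
    coll = Collection("LST-collection", QualityInfo(accuracy=1.0, lineage="validated against in-situ stations"))
    granule = Dataset("LST-granule-2011-07-14", parent=coll)
    print(granule.effective_quality().accuracy)  # 1.0, inherited from the collection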

Relevance:

10.00%

Publisher:

Abstract:

Because metadata that underlies semantic web applications is gathered from distributed and heterogeneous data sources, it is important to ensure its quality (i.e., to reduce duplicates, spelling errors and ambiguities). However, current infrastructures that acquire and integrate semantic data have only marginally addressed the issue of metadata quality. In this paper we present our metadata acquisition infrastructure, ASDI, which pays special attention to ensuring that high-quality metadata is derived. Central to the architecture of ASDI is a verification engine that relies on several semantic web tools to check the quality of the derived data. We tested our prototype in the context of building a semantic web portal for our lab, KMi. An experimental evaluation comparing the automatically extracted data against manual annotations indicates that the verification engine enhances the quality of the extracted semantic metadata.
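One of the quality problems named above, duplicate entries, gives a feel for what a verification step has to do. The fragment below is a minimal sketch of such a check (near-duplicate labels flagged by string similarity); the threshold and the use of difflib are assumptions for illustration, not ASDI's actual engine.

# Minimal illustration (not the ASDI verification engine): flag near-duplicate
# entity labels produced by noisy extraction. The 0.9 threshold is an assumption.
from difflib import SequenceMatcher
from itertools import combinations

def near_duplicates(labels, threshold=0.9):
    """Return pairs of labels that are probably the same entity."""
    pairs = []
    for a, b in combinations(labels, 2):
        ratio = SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()
        if ratio >= threshold:
            pairs.append((a, b, round(ratio, 2)))
    return pairs

if __name__ == "__main__":
    extracted = ["Knowledge Media Institute", "Knowledge Media institute ", "Open University"]
    print(near_duplicates(extracted))
    # [('Knowledge Media Institute', 'Knowledge Media institute ', 1.0)]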

Relevance:

10.00%

Publisher:

Abstract:

The Indian petroleum industry is passing through a very dynamic business environment due to liberalization. Effective project management for developing new infrastructure and maintaining existing facilities is considered one means of remaining competitive, yet current practices suffer from many shortcomings: time, cost and quality targets are missed in almost every project. This study identifies the specific causes of project failure, first by characterizing projects in the Indian petroleum industry, and then suggests remedial measures for resolving these issues. The proposed project management model is integrated through an information management system and demonstrated through a case study.

Relevance:

10.00%

Publisher:

Abstract:

With their compact spectrum and high tolerance to residual chromatic dispersion, duobinary formats are attractive for the deployment of 40 Gb/s technology on 10 Gb/s WDM Long-Haul transmission infrastructures. Here, we compare the robustness of various duobinary formats when facing 40 Gb/s transmission impairments.

Relevance:

10.00%

Publisher:

Abstract:

Purpose: To understand the tensions that servitization activities create between actors within networks. Design/methodology/approach: Interviews were conducted with manufacturers, intermediaries and customers across a range of industrial sectors. Findings: Tensions relating to two key sets of capabilities are identified: in developing or acquiring (i) operant technical expertise and (ii) operand service infrastructure. The former tension concerns whom knowledge is co-created with and where expertise resides. The latter involves a territorial investment component; firms developing strategies to acquire greater access to, or ownership of, infrastructures closer to customers. Developing and acquiring these capabilities is a strategic decision on the part of managers of servitizing firms, in order to gain recognized power and control in a particular territory. Originality/value: This paper explores how firms’ servitization activities involve value appropriation (from the rest of the network), contrasting with the narrative norm for servitization: that it creates additional value. There is a need to understand the tensions that servitization activities create within networks. Some firms may be able to improve servitization performance through co-operation rather than competition, generating co-opetitive relationships. Others may need to become much more aggressive, if they are to take a greater share of the ‘value’ from the value chain.

Relevance:

10.00%

Publisher:

Abstract:

Rumen V. Nikolov - The article analyses the need for institutional change in schools and universities so that they can adapt to the contemporary requirements of the knowledge society. In parallel, it analyses the phenomenon of e-learning, the global educational reform and the need to develop and apply new pedagogical models. The article places particular emphasis on Web 2.0 technologies and e-infrastructures, as well as on their impact on education and research in schools and universities. Teacher professional development designed to meet the new challenges is regarded as a key factor for the successful introduction of new technologies into schools. It is important to note the need to develop a strategy for lifelong teacher training that takes into account current scientific advances in technology-enhanced learning and new theories of learning. It is recommended that the development of social skills and competences suited to working in a Web 2.0-based learning environment and with global social software be included both in the curricula of school students and in teacher-training courses.

Relevance:

10.00%

Publisher:

Abstract:

The emergence of innovative and revolutionary Integration Technologies (IntTech) has strongly influenced local government authorities (LGAs) in their decision-making processes. LGAs that plan to adopt such IntTech may regard this as a serious investment. Advocates, however, claim that such IntTech have emerged to overcome integration problems at all levels (e.g. data, object and process). With the emergence of electronic government (e-Government), LGAs have turned to IntTech to fully automate their services, offer them on-line and integrate their IT infrastructures. While earlier research on the adoption of IntTech has considered several factors (e.g. pressure, technological, support, and financial), inadequate attention and resources have been devoted to systematically investigating the individual, decision and organisational context factors that influence top management's decisions to adopt IntTech in LGAs. It is widely recognized that the success of an organisation's operations relies heavily on understanding individuals' attitudes and behaviours, the surrounding context and the types of decisions taken. Based on empirical evidence gathered through two intensive case studies, this paper investigates the factors that influence decision makers when adopting IntTech. The findings illustrate two different doctrines: one inclined and receptive towards taking risky decisions, the other disinclined. Several underlying rationales can be attributed to such mind-sets in LGAs. The authors aim to contribute to the body of knowledge by exploring the factors influencing top management's decision-making process when adopting IntTech vital for facilitating LGAs' operational reforms.

Relevance:

10.00%

Publisher:

Abstract:

Distributed fibre sensors provide unique capabilities for monitoring large infrastructures with high resolution. Practically all these sensors are based on some kind of backscattering interaction: a pulsed activating signal is launched at one end of the sensing fibre and the backscattered signal is read as a function of the time of flight of the pulse along the fibre. A key limitation in the measurement range of all these sensors is introduced by fibre attenuation. As the pulse travels along the fibre, losses cause a drop in signal contrast and consequently a growth in measurement uncertainty. In typical single-mode fibres, attenuation imposes a range limit of less than 30 km for resolutions in the order of 1-2 meters. An interesting improvement in this performance can be obtained by using distributed amplification along the fibre [1]. Distributed amplification gives a more homogeneous signal power along the sensing fibre, which also makes it possible to reduce the signal power at the input and therefore avoid nonlinearities. However, in long structures (≥ 50 km), plain distributed amplification does not perfectly compensate the losses, and significant power variations along the fibre are to be expected, leading to inevitable limitations in the measurements. From this perspective, it is intuitively clear that the best possible solution for distributed sensors would be offered by a virtually transparent fibre, i.e. a fibre exhibiting effectively zero attenuation in the spectral region of the pulse. In addition, it can be shown that lossless transmission is the working point that minimizes the build-up of amplified spontaneous emission (ASE) noise. © 2011 IEEE.
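The range limit quoted above follows directly from the two-way loss seen by backscattered light. The short sketch below illustrates the effect with a typical single-mode attenuation figure (0.2 dB/km); the numbers are illustrative assumptions, not values from the paper.

# Illustrative numbers only: how fibre attenuation erodes the backscattered
# signal with distance, and why an effectively "transparent" (zero-loss) span
# keeps the trace level flat.
ALPHA_DB_PER_KM = 0.2   # typical single-mode fibre attenuation

def round_trip_loss_db(z_km, alpha_db_per_km=ALPHA_DB_PER_KM):
    """Two-way loss seen by light backscattered at position z along the fibre."""
    return 2.0 * alpha_db_per_km * z_km

for z in (0, 10, 30, 50):
    print(f"z = {z:2d} km : passive fibre {-round_trip_loss_db(z):6.1f} dB, "
          f"ideal lossless span 0.0 dB")
# At 50 km the passive trace is 20 dB down, so the far end of the fibre is
# measured with far worse contrast than the near end.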

Relevance:

10.00%

Publisher:

Abstract:

Mobile communication and networking infrastructures play an important role in the development of smart cities, supporting the real-time information exchange and management required in modern urbanization. Mobile WiFi devices that help offload data traffic from the macro-cell base station and serve end users within a closer range can significantly improve the connectivity of wireless communications between essential components of a city, including infrastructural and human devices. However, this offloading function, achieved through interworking between LTE and WiFi systems, changes the pattern of resource distribution operated by the base station. In this paper, a resource allocation scheme is proposed to ensure stable service coverage and end-user quality of experience (QoE) when offloading takes place in a macro-cell environment. In this scheme, a rate redistribution algorithm is derived to form a parametric scheduler that meets the required levels of efficiency and fairness, guided by a no-reference quality assessment metric. We show that the performance of resource allocation can be regulated by this scheduler without affecting the service coverage offered by the WLAN access point. The performance of different interworking scenarios and macro-cell scheduling policies is also compared.
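To make the efficiency/fairness trade-off of a parametric scheduler concrete, the sketch below uses a generic weighted alpha-fair split of a cell's rate budget: small alpha concentrates rate on high-weight users, alpha = 1 gives shares proportional to weight, and large alpha equalizes rates. This is a standard parametric rule chosen for illustration, not the paper's rate redistribution algorithm or its QoE metric.

# Sketch of a *parametric* rate scheduler (assumption: a weighted alpha-fair
# split, not the specific algorithm in the paper). alpha steers the trade-off
# between favouring high-weight users and equalizing rates.
def alpha_fair_rates(weights, capacity, alpha):
    """Split `capacity` among users according to their weights and alpha."""
    if alpha <= 0:
        raise ValueError("alpha must be positive for this closed form")
    raw = [w ** (1.0 / alpha) for w in weights]
    total = sum(raw)
    return [capacity * r / total for r in raw]

if __name__ == "__main__":
    weights = [4.0, 2.0, 0.5]            # e.g. users near vs. far from the cell centre
    for a in (0.2, 1.0, 8.0):
        print(a, [round(x, 2) for x in alpha_fair_rates(weights, capacity=30.0, alpha=a)])
    # small alpha: almost all rate to the top user; alpha=1: proportional; large alpha: near-equal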

Relevance:

10.00%

Publisher:

Abstract:

UncertWeb is a European research project running from 2010-2013 that will realize the uncertainty-enabled model web. The assumption is that data services, in order to be useful, need to provide information about the accuracy or uncertainty of the data in a machine-readable form. Models taking these data as input should understand this and propagate errors through model computations, and quantify and communicate errors or uncertainties generated by the model approximations. The project will develop technology to realize this and provide demonstration case studies.
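Propagating input uncertainty through a model computation, as described above, can be illustrated with plain Monte Carlo sampling. The toy model and the error magnitudes below are invented for illustration; UncertWeb itself is concerned with richer, machine-readable encodings of such uncertainty.

# A minimal sketch of what "propagating uncertainty through a model" can mean
# in practice (Monte Carlo sampling over uncertain inputs).
import random
import statistics

def model(temperature_c, rainfall_mm):
    """Toy downstream model combining two observed inputs."""
    return 0.8 * temperature_c + 0.02 * rainfall_mm

def propagate(temp_mean, temp_sd, rain_mean, rain_sd, n=10_000, seed=1):
    rng = random.Random(seed)
    outputs = [model(rng.gauss(temp_mean, temp_sd), rng.gauss(rain_mean, rain_sd))
               for _ in range(n)]
    return statistics.mean(outputs), statistics.stdev(outputs)

if __name__ == "__main__":
    mean, sd = propagate(temp_mean=21.0, temp_sd=0.5, rain_mean=300.0, rain_sd=40.0)
    print(f"model output: {mean:.2f} +/- {sd:.2f}")  # the output now carries an uncertainty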

Relevance:

10.00%

Publisher:

Abstract:

In recent years, wireless communication infrastructures have been widely deployed for both personal and business applications. IEEE 802.11 series Wireless Local Area Network (WLAN) standards attract a great deal of attention due to their low cost and high data rates. Wireless ad hoc networks that use IEEE 802.11 standards are one of the hot spots of recent network research, and designing appropriate Media Access Control (MAC) layer protocols is one of the key issues for such networks.

Existing wireless applications typically use omni-directional antennas, whose gain is the same in all directions. Due to the nature of the Distributed Coordination Function (DCF) mechanism of the IEEE 802.11 standards, only one of the one-hop neighbors can send data at a time; nodes other than the sender and the receiver must be idle or listening, otherwise collisions can occur. The downside of omni-directionality is that the spatial reuse ratio is low and the capacity of the network is considerably limited.

Directional antennas have therefore been introduced to improve spatial reuse. A directional antenna offers the following benefits: it can improve transport capacity by decreasing the interference of a directional main lobe; it can increase coverage range due to a higher SINR (Signal to Interference plus Noise Ratio), i.e., better connectivity can be achieved for the same power consumption; and power usage can be reduced, i.e., a transmitter can lower its power for the same coverage.

To exploit the advantages of directional antennas, we propose a relay-enabled MAC protocol. Two relay nodes are chosen to forward data when the channel condition of the direct link from the sender to the receiver is poor. The two relay nodes can transfer data at the same time, and a pipelined data transmission can be achieved using directional antennas. Throughput improves significantly when the relay-enabled MAC protocol is introduced.

Besides these strong points, directional antennas also have explicit drawbacks, such as the hidden-terminal and deafness problems and the requirement of maintaining location information for each node. An omni-directional antenna should therefore be used in some situations. The combined use of omni-directional and directional antennas leads to the problem of configuring heterogeneous antennas: given a network topology and a traffic pattern, we need to find a trade-off between using omni-directional and directional antennas that yields better network performance.

Directly and mathematically establishing the relationship between network performance and antenna configuration is extremely difficult, if not intractable. In this research we therefore propose several clustering-based methods that obtain approximate solutions to the heterogeneous antenna configuration problem and improve network performance significantly. The proposed methods consist of two steps. The first step (clustering links) clusters the links into groups based on a matrix-based system model; after clustering, links in the same group have similar neighborhood nodes and will use the same type of antenna. The second step (labeling links) decides the antenna type for each group: some groups of links use directional antennas and others adopt omni-directional antennas. Experiments comparing the proposed methods with existing methods demonstrate that the clustering-based methods improve network performance significantly.
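The two-step idea (group links by neighborhood similarity, then label each group with an antenna type) can be sketched very compactly. The similarity measure, threshold and labeling rule below are simplifications for illustration, not the dissertation's matrix-based system model.

# Sketch of the two-step clustering-based configuration: (1) cluster links
# whose neighborhoods overlap, (2) give each cluster one antenna type.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def cluster_links(link_neighbors, threshold=0.5):
    """Greedy grouping: a link joins the first cluster whose seed it resembles."""
    clusters = []  # each cluster: [seed link, member links...]
    for link, nbrs in link_neighbors.items():
        for cluster in clusters:
            if jaccard(nbrs, link_neighbors[cluster[0]]) >= threshold:
                cluster.append(link)
                break
        else:
            clusters.append([link])
    return clusters

def label_clusters(clusters, link_neighbors, dense_cutoff=3):
    """Toy rule: links with many interfering neighbors get directional antennas."""
    labels = {}
    for cluster in clusters:
        avg_degree = sum(len(link_neighbors[l]) for l in cluster) / len(cluster)
        antenna = "directional" if avg_degree >= dense_cutoff else "omni"
        labels.update({l: antenna for l in cluster})
    return labels

if __name__ == "__main__":
    link_neighbors = {"A-B": {"C", "D", "E"}, "A-C": {"B", "D", "E"}, "F-G": {"H"}}
    clusters = cluster_links(link_neighbors)
    print(label_clusters(clusters, link_neighbors))
    # {'A-B': 'directional', 'A-C': 'directional', 'F-G': 'omni'}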

Relevance:

10.00%

Publisher:

Abstract:

This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of "cloud computing" services. In the context of provisioning resources for medical applications, such environments allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher level services that further abstract the infrastructure-related issues can be built on top of such infrastructures. For example, a medical imaging service can allow medical professionals to process their data in the cloud, freeing them from the burden of having to deploy and manage these resources themselves. In this work, we focus on issues related to scheduling scientific workloads on virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: (1) an in-depth analysis of the execution characteristics of the target applications when run in virtualized environments; (2) a performance prediction methodology applicable to the target environment; (3) a scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality of service guarantees. In the process of addressing these pertinent issues for our target user base (i.e. medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%. Our scheduling methodology, which is tested with medical image processing workloads, is compared to that of two baseline scheduling solutions and we find that it outperforms them in terms of both the number of jobs processed and resource utilization by 20–30%, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
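As an illustration of how predicted execution times might feed a deadline-aware scheduler, the sketch below pads each prediction by an error margin and places jobs earliest-deadline-first on the least-loaded VM. The policy and the 15% margin (echoing the reported average prediction error) are assumptions for illustration, not the dissertation's algorithm.

# Illustrative only: predicted runtimes driving a deadline-aware scheduler.
import heapq

def schedule(jobs, n_vms, error_margin=0.15):
    """jobs: list of (name, predicted_runtime, deadline). Returns (plan, missed)."""
    vms = [(0.0, i) for i in range(n_vms)]   # (time VM becomes free, vm id)
    heapq.heapify(vms)
    plan, missed = [], []
    for name, runtime, deadline in sorted(jobs, key=lambda j: j[2]):  # EDF order
        free_at, vm = heapq.heappop(vms)
        padded = runtime * (1.0 + error_margin)   # hedge against prediction error
        finish = free_at + padded
        (plan if finish <= deadline else missed).append((name, vm, round(finish, 1)))
        heapq.heappush(vms, (finish, vm))
    return plan, missed

if __name__ == "__main__":
    jobs = [("mri-segmentation", 40, 120), ("dose-calc", 90, 100), ("registration", 30, 200)]
    print(schedule(jobs, n_vms=2))
    # jobs whose padded finish time exceeds the deadline are reported as missed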

Relevance:

10.00%

Publisher:

Abstract:

Buildings and other infrastructures located in the coastal regions of the US have a higher level of wind vulnerability. Reducing the increasing property losses and casualties associated with severe windstorms has been the central research focus of the wind engineering community. The present wind engineering toolbox consists of building codes and standards, laboratory experiments, and field measurements. The American Society of Civil Engineers (ASCE) 7 standard provides wind loads only for buildings with common shapes; for complex cases it refers to physical modeling. Although this option can be economically viable for large projects, it is not cost-effective for low-rise residential houses. To circumvent these limitations, a numerical approach based on the techniques of Computational Fluid Dynamics (CFD) has been developed. Recent advances in computing technology and significant developments in turbulence modeling are making numerical evaluation of wind effects a more affordable approach. The present study targeted cases that are not addressed by the standards, including wind loads on complex roofs of low-rise buildings, the aerodynamics of tall buildings, and the effects of complex surrounding buildings. Among all the turbulence models investigated, the large eddy simulation (LES) model performed best in predicting wind loads. The application of a spatially evolving, time-dependent wind velocity field with the relevant turbulence structures at the inlet boundaries was found to be essential. All results were compared and validated with experimental data. The study also revealed CFD's unique flow visualization and aerodynamic data generation capabilities, along with a better understanding of the complex three-dimensional aerodynamics of wind-structure interactions. With proper modeling that realistically represents the actual turbulent atmospheric boundary layer flow, CFD can offer an economical alternative to the existing wind engineering tools. CFD's easy accessibility is expected to transform the practice of structural design for wind, resulting in more wind-resilient and sustainable systems by encouraging optimal aerodynamic and sustainable structural/building design. Thus, this method will help ensure public safety and reduce economic losses due to wind perils.

Relevance:

10.00%

Publisher:

Abstract:

The emergence of a technology-intensive economy requires the transformation of business models in the hospitality industry. Established companies can face technological, cultural, organizational and relationship barriers in moving from a traditional business model to an e-business model. The authors suggest that market, learning, and business process orientations at the organizational level can help remove some of the barriers toward e-business and facilitate the development of e-business within existing organizational infrastructures.

Relevance:

10.00%

Publisher:

Abstract:

The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios, 2) flexibility, for testing new protocols or applications in diverse settings, and 3) inter-operability, for combining simulated and real network entities in experiments. This dissertation tackles these issues in three different dimensions. First, we present SVEET, a system that enables inter-operability between real and simulated hosts. In order to increase the scalability of networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments and its capabilities are assessed through case studies involving real applications. Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real-time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, through which a great variety of network conditions can be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions, on behalf of the user, for instantiating network experiments on shared infrastructures. Finally, to further increase the scalability of network testbeds to handle large-scale high-capacity networks, we present a novel symbiotic simulation approach. We present SymbioSim, a testbed for large-scale network experimentation where a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On the one hand, the simulation system benefits from incorporating the traffic metadata from real applications in the emulation system to reproduce the realistic traffic conditions. On the other hand, the emulation system benefits from receiving the continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support the symbiotic approach include: 1) a model downscaling scheme that can significantly reduce the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model for the downscaled emulation system to accurately represent the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
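The time-dilated synchronization mentioned for SVEET can be sketched with a simple scaled clock: real time is divided by a dilation factor, so a slower-than-real-time simulation still appears, from the real hosts' point of view, to run at full speed. The class, method names and factor value below are illustrative assumptions, not SVEET's interface.

# Sketch of the time-dilation idea: with tdf = 10, ten real seconds count as
# one virtual second, giving the simulator more real time per virtual second.
import time

class DilatedClock:
    def __init__(self, tdf):
        self.tdf = tdf
        self.start = time.monotonic()

    def virtual_now(self):
        return (time.monotonic() - self.start) / self.tdf

    def sleep_virtual(self, seconds):
        time.sleep(seconds * self.tdf)    # wait long enough in real time

if __name__ == "__main__":
    clock = DilatedClock(tdf=10)
    clock.sleep_virtual(0.05)             # the application believes 50 ms passed
    print(f"virtual elapsed ~{clock.virtual_now():.3f} s")  # ~0.05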