975 results for Virtual Reference Service
Abstract:
This thesis is mainly concerned with a model calculation of generalized parton distributions (GPDs). We calculate vector and axial-vector GPDs for the N→N and N→Δ transitions in the framework of a light-front quark model. This requires working out the connection between transition amplitudes and GPDs. We provide the first quark-model calculation of N→Δ GPDs. The examination of the transition amplitudes leads to various model-independent consistency relations. These relations are not exactly obeyed by our model calculation, since the impulse approximation used in the light-front quark model violates Poincaré covariance. We explore the impact of this covariance breaking on the GPDs and form factors determined in our model calculation and find large effects. The reference-frame dependence of our results, which originates from the breaking of Poincaré covariance, can be eliminated by introducing spurious covariants. We extend this formalism in order to obtain frame-independent results from our transition amplitudes.
Abstract:
Beamforming entails the joint processing of multiple signals received or transmitted by an array of antennas. This thesis addresses the implementation of beamforming in two distinct systems: a distributed network of independent sensors, and a broad-band multi-beam satellite network. With the rising popularity of wireless sensors, scientists are taking advantage of the flexibility of these devices, which come at very low implementation cost. Simplicity, however, is intertwined with scarce power resources, which must be carefully rationed to sustain measurement campaigns for the whole duration of the application. In this scenario, distributed beamforming is a cooperative communication technique that allows the nodes in the network to emulate a virtual antenna array, seeking power gains on the order of the size of the network itself when delivering a common message signal to the receiver. To achieve a desired beamforming configuration, however, all nodes in the network must agree upon the same phase reference, which is challenging in a distributed setup where all devices are independent. The first part of this thesis presents new algorithms for phase alignment that prove to be more energy efficient than existing solutions. With the ever-growing demand for broad-band connectivity, satellite systems have great potential to guarantee service where terrestrial systems cannot penetrate. To satisfy the constantly increasing demand for throughput, satellites are equipped with multi-fed reflector antennas to resolve spatially separated signals. However, increasing the number of feeds on the payload burdens the link between the satellite and the gateway with an extensive amount of signaling, and may call for much more expensive multiple-gateway infrastructures.
This thesis focuses on an on-board non-adaptive signal-processing scheme, termed Coarse Beamforming, whose objective is to reduce the communication load on the link between the ground station and the space segment.
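The phase-alignment problem mentioned above can be illustrated with the classic one-bit-feedback scheme (random perturbation, keep-if-better). This is a generic textbook baseline, not the thesis's own algorithm, and all names and parameter values below are our assumptions:

```python
import cmath
import math
import random

random.seed(42)

def received_power(phases):
    # Power of the coherent sum of unit-amplitude carriers at the receiver.
    return abs(sum(cmath.exp(1j * p) for p in phases)) ** 2

def one_bit_feedback_alignment(n_nodes=20, iterations=2000, delta=0.2):
    # Each node perturbs its phase at random; the receiver broadcasts a
    # single bit ("better"/"worse") and the nodes keep or revert the change.
    phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n_nodes)]
    best = received_power(phases)
    for _ in range(iterations):
        trial = [p + random.uniform(-delta, delta) for p in phases]
        power = received_power(trial)
        if power > best:  # the one feedback bit
            phases, best = trial, power
    return best / n_nodes ** 2  # fraction of the ideal coherent gain
```

With only one bit of feedback per iteration, the nodes never exchange their actual phases, which is what makes schemes of this family attractive for power-starved sensors.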
Abstract:
The cybernetics revolution of recent years has greatly improved our lives, giving us immediate access to services and to a huge amount of information over the Internet. Nowadays users are increasingly asked to enter sensitive information on the Internet, leaving traces of themselves everywhere. But some categories of people cannot risk revealing their identities on the Internet. Although originally developed to protect U.S. intelligence communications online, Tor is nowadays the most famous low-latency network guaranteeing both the anonymity and the privacy of its users. The aim of this thesis project is to understand thoroughly how the Tor protocol works, not only by studying its theory but also by putting those concepts into practice, with particular attention to security topics. A virtual testing environment was configured in order to run a private Tor network that emulates the real one. This setup allows experiments to be conducted without putting the anonymity and privacy of real users at risk. We used a Tor patch that stores TLS and circuit keys, which are then given as input to a Tor dissector for Wireshark in order to obtain decrypted and decoded traffic. Observing cleartext traffic allowed us to verify the protocol outline and to confirm the format of each cell. These tools also allowed us to identify a traffic pattern that can be used to mount a traffic-correlation attack which passively deanonymizes hidden-service clients: an attacker controlling two nodes of the Tor network is able to link a request for a given hidden service to the client who issued it. Assessing the robustness of the traffic pattern and the statistics of the attack, such as its true-positive and false-positive rates, is left for future work.
Abstract:
Purpose: To determine the potential of minimally invasive postmortem computed tomographic (CT) angiography combined with image-guided tissue biopsy of the myocardium and lungs in decedents who were thought to have died of acute chest disease, and to compare this method with conventional autopsy as the reference standard.
Materials and Methods: The responsible justice department and ethics committee approved this study. Twenty corpses (four female and 16 male; age range, 15-80 years), all of whom were reported to have had antemortem acute chest pain, were imaged with postmortem whole-body CT angiography and underwent standardized image-guided biopsy. The standard included three biopsies of the myocardium and a single biopsy of bilateral central lung tissue. Additional biopsies of pulmonary clots, to differentiate pulmonary embolism from postmortem organized thrombus, were performed after initial analysis of the cross-sectional images. Subsequent traditional autopsy with sampling of histologic specimens was performed in all cases. Thereafter, conventional histologic and autopsy reports were compared with postmortem CT angiography and CT-guided biopsy findings. A Cohen κ coefficient analysis was performed to explore the effect of the clustered nature of the data.
Results: In 19 of the 20 cadavers, findings at postmortem CT angiography combined with CT-guided biopsy validated the cause of death found at traditional autopsy. In one cadaver, early myocardial infarction of the papillary muscles had been missed. The Cohen κ coefficient was 0.94. There were four instances of pulmonary embolism, three aortic dissections (Stanford type A), three myocardial infarctions, three instances of fresh coronary thrombosis, three cases of obstructive coronary artery disease, one ruptured ulcer of the ascending aorta, one ruptured aneurysm of the right subclavian artery, one case of myocarditis, and one pulmonary malignancy with pulmonary artery erosion. In seven of 20 cadavers, CT-guided biopsy provided additional histopathologic information that substantiated the final diagnosis of the cause of death.
Conclusion: Postmortem CT angiography combined with image-guided biopsy, because of its minimally invasive nature, has a potential role in detecting the cause of death after acute chest pain. © RSNA, 2012.
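The agreement statistic quoted in the Results (Cohen κ = 0.94) can be computed from a confusion matrix of the two methods' diagnoses. The sketch below uses hypothetical counts for illustration, not the study's data:

```python
def cohen_kappa(confusion):
    # confusion[i][j]: cases classified i by method A and j by method B.
    total = sum(sum(row) for row in confusion)
    n = len(confusion)
    # Observed agreement: fraction of cases on the diagonal.
    p_observed = sum(confusion[i][i] for i in range(n)) / total
    # Expected chance agreement from the row and column marginals.
    p_expected = sum(
        (sum(confusion[i]) / total) * (sum(row[i] for row in confusion) / total)
        for i in range(n)
    )
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical 2x2 table (finding present/absent by each method).
kappa = cohen_kappa([[10, 1], [2, 7]])
```

A κ of 1.0 indicates perfect agreement; values near 0 indicate agreement no better than chance.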
Abstract:
The past decade has seen energy consumption in servers and Internet Data Centers (IDCs) skyrocket. A recent survey estimated that worldwide spending on servers and cooling has risen above $30 billion and is likely to exceed spending on new server hardware. The rapid rise in energy consumption poses a serious threat to both energy resources and the environment, which makes green computing not only worthwhile but necessary. This dissertation tackles the challenges of reducing both the energy consumption of server systems and the cost for Online Service Providers (OSPs). Two distinct subsystems account for most of an IDC's power: the server system, which accounts for about 56% of the total power consumption, and the cooling and humidification systems, which account for about 30%. The server system dominates the energy consumption of an IDC, and its power draw can vary drastically with data-center utilization. In this dissertation, we propose three models to achieve energy efficiency in web server clusters: an energy-proportional model, an optimal server-allocation and frequency-adjustment strategy, and a constrained Markov model. The proposed models combine Dynamic Voltage/Frequency Scaling (DVFS) and Vary-On, Vary-Off (VOVF) mechanisms that work together for greater energy savings, and corresponding strategies are proposed to deal with the transition overheads. We further extend server energy management to the IDC's cost management, helping OSPs to conserve energy, manage their electricity costs, and lower their carbon emissions. We have developed an optimal energy-aware load-dispatching strategy that periodically maps more requests to the locations with lower electricity prices. A carbon emission limit is imposed, and the volatility of the carbon-offset market is also considered. Two energy-efficient strategies are applied to the server system and the cooling system, respectively.
With the rapid development of cloud services, we also carry out research to reduce server energy in cloud-computing environments. In this work, we propose a new live virtual machine (VM) placement scheme that can effectively map VMs to Physical Machines (PMs) with substantial energy savings in a heterogeneous server cluster. A VM/PM mapping-probability matrix is constructed, in which each VM request is assigned a probability of running on each PM. The matrix takes into account resource limitations, VM operation overheads, and server reliability, as well as energy efficiency. The evolution of Internet Data Centers and the increasing demands of web services pose great challenges for improving the energy efficiency of IDCs, and we outline several potential areas for future research in each chapter.
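A toy version of the VM/PM mapping-probability matrix idea might look as follows. The weighting used here (zero probability for infeasible placements, otherwise inversely proportional to an assumed per-unit power cost) is our own illustration, not the dissertation's actual model:

```python
def build_mapping_matrix(vm_demands, pm_capacities, pm_power_per_unit):
    # One row per VM request, one column per PM; each row is a
    # probability distribution over the PMs that could host the VM.
    matrix = []
    for demand in vm_demands:
        weights = [
            0.0 if demand > cap else 1.0 / power  # infeasible PMs get 0
            for cap, power in zip(pm_capacities, pm_power_per_unit)
        ]
        total = sum(weights)
        matrix.append([w / total for w in weights])
    return matrix

# Two VM requests, two PMs: PM 1 is twice as power-hungry per unit,
# and the second VM does not fit on PM 0 at all.
matrix = build_mapping_matrix([2, 6], [4, 8], [1.0, 2.0])
```

A scheduler could then sample each VM's host from its row, biasing placement toward efficient machines while never violating capacity.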
Abstract:
The use of virtual reality as a tool in the area of spatial cognition raises the question of the quality of learning transfer from a virtual to a real environment. It is first necessary to determine, with healthy subjects, the cognitive aids that improve the quality of transfer and the conditions required, especially since virtual reality can be an effective tool in cognitive rehabilitation. The purpose of this study was to investigate the influence of the exploration mode of a virtual environment (Passive vs. Active), according to route complexity (Simple vs. Complex), on the quality of spatial knowledge transfer in three spatial tasks. Ninety subjects (45 men and 45 women) participated. Spatial learning was evaluated with wayfinding, sketch-mapping, and picture-classification tasks in the context of the Bordeaux district. In the wayfinding task, results indicated that active learning in a virtual environment (VE) increased performance compared with the passive learning condition, irrespective of route complexity. In the sketch-mapping task, active learning in a VE helped the subjects transfer their spatial knowledge from the VE to reality, but only when the route was complex. In the picture-classification task, active learning in a VE when the route was complex did not help the subjects transfer their spatial knowledge. These results are explained in terms of knowledge levels and frames/strategies of reference [SW75, PL81, TH82].
Abstract:
This paper addresses the novel notion of offering a radio access network as a service. Its components may be instantiated on general-purpose platforms with pooled resources (both radio and hardware), dimensioned on demand, elastically, and following the pay-per-use principle. A novel architecture is proposed to support this concept. The architecture's strength lies in its modularity, well-defined functional elements, and clean separation between operational and control functions. By moving to the cloud much of the processing traditionally performed in dedicated hardware, it allows optimisation of hardware utilisation and a reduction of deployment and operation costs. It enables operators to upgrade their networks and to quickly deploy and adapt resources to demand. New players may also enter the market easily, permitting a virtual network operator to provide connectivity to its users.
Abstract:
The contribution of Starlette, Stella, and AJISAI is currently neglected when defining the International Terrestrial Reference Frame, despite long time series of precise SLR observations and a huge amount of available data. The inferior accuracy of the orbits of low-orbiting geodetic satellites is the main reason for this neglect. The Analysis Centers of the International Laser Ranging Service (ILRS ACs) are, however, considering the inclusion of low-orbiting geodetic satellites in the standard ILRS products, which are currently based on the LAGEOS and Etalon satellites, in place of the sparsely observed, and thus virtually negligible, Etalons. We process ten years of SLR observations to Starlette, Stella, AJISAI, and LAGEOS, and we assess the impact of these Low Earth Orbiting (LEO) SLR satellites on the SLR-derived parameters. We study different orbit parameterizations, in particular different arc lengths and the impact of pseudo-stochastic pulses and dynamical orbit parameters on the quality of the solutions. We find that the repeatability of the East and North components of station coordinates, the quality of polar coordinates, and the scale estimates of the reference frame are improved when combining LAGEOS with low-orbiting SLR satellites. In the multi-satellite SLR solutions, the scale and the Z component of the geocenter coordinates are less affected by deficiencies in solar radiation pressure modeling than in the LAGEOS-1/2 solutions, owing to substantially reduced correlations between the Z geocenter coordinate and the empirical orbit parameters. Finally, we find that the standard values of the Center-of-mass corrections (CoM) for geodetic LEO satellites are not valid for the currently operating SLR systems.
The variations of station-dependent differential range biases reach 52 and 25 mm for AJISAI and Starlette/Stella, respectively, which is why estimating station-dependent range biases, or using a station-dependent CoM instead of one value for all SLR stations, is strongly recommended. This clearly indicates that the ILRS effort to produce CoM corrections for each satellite, which are site-specific and depend on the system characteristics at the time of tracking, is very important and needs to be implemented in SLR data analysis.
Abstract:
Over the last few years, Facebook has become a widespread and continuously expanding medium of communication. As a new medium of social interaction, Facebook produces its own communication style. My focus of analysis is how Facebook users from the city of Malaga create this style by means of phonic features typical of the Andalusian variety, and how the users reflect on their use of these phonic features. This project is based on a theoretical framework that combines variationist sociolinguistics with computer-mediated communication (CMC) research to study the emergence of a style peculiar to online social networks. In a corpus of Facebook users from three zones of Malaga, I have analysed the use of non-standard phonic features and then compared them with the same features in a reference corpus collected on three beaches of Malaga. From this comparison it can be deduced that the social and linguistic factors analysed work differently in real and virtual speech. Given these different uses, the peculiar electronic communication of Facebook can be considered a style constrained by the electronic medium: a style that serves the users to create social meaning and to express their linguistic identities.
Abstract:
Recently, the telecommunications industry has benefited from infrastructure sharing, one of the most fundamental enablers of cloud computing, leading to the emergence of the Mobile Virtual Network Operator (MVNO) concept. The principal aims of this approach are to support on-demand provisioning and elasticity of virtualized mobile-network components, based on data-traffic load. To realize this, during operation and management procedures the virtualized services need to be triggered in order to scale an instance up/down or out/in. In this paper we propose an architecture called MOBaaS (Mobility and Bandwidth Availability Prediction as a Service), comprising two algorithms that predict user mobility and network-link bandwidth availability; it can be implemented in a cloud-based mobile-network structure and used as a support service by any other virtualized mobile-network service. MOBaaS can provide prediction information in order to generate the triggers required for on-demand deployment, provisioning, and disposal of virtualized network components. This information can also be used for self-adaptation procedures and optimal network-function configuration during run-time operation. Through preliminary experiments with a prototype implementation on the OpenStack platform, we evaluated and confirmed the feasibility and effectiveness of the prediction algorithms and the proposed architecture.
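The abstract does not specify which prediction algorithms MOBaaS uses. As one plausible sketch of the mobility side, a first-order Markov model over location traces could back a next-location predictor; all function and variable names below are our assumptions:

```python
from collections import defaultdict

def train_next_location_model(traces):
    # Count observed transitions between consecutive locations.
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for here, nxt in zip(trace, trace[1:]):
            counts[here][nxt] += 1
    # Predict the most frequently observed successor of each location.
    return {loc: max(nxts, key=nxts.get) for loc, nxts in counts.items()}

# Toy traces of cell/location identifiers for one user.
model = train_next_location_model([
    ["A", "B", "C"], ["A", "B", "A"], ["A", "B", "C"],
])
```

A prediction service of this shape could emit the scaling triggers the abstract describes whenever the predicted next location implies load moving to another network component.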
Abstract:
Advances in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance may suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource-management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means of specifying the quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from instantiating service VMs in the correct order with an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads such as the number of users. Application owners are interested in finding the optimum amount of computing and network resources needed to ensure that the performance requirements of all their applications are met, and in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, infrastructure providers are interested in optimally provisioning the virtual resources onto the available physical infrastructure so that their operational costs are minimized while the performance of tenants' applications is maximized.
Motivated by the complexities associated with managing and scaling distributed applications while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource-management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on dynamic sizing (scaling) of virtual infrastructures composed of virtual machine (VM)-bound application services. We describe several algorithms for adapting the number of VMs allocated to a distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for the dynamic composition of scaling rules for distributed services, which uses benchmark-generated application-monitoring traces, and we show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We further provide a detailed description of the multi-objective infrastructure resource-allocation problem and various approaches to solving it. We present a resource-management system based on a genetic algorithm, which performs allocation of virtual resources while optimizing multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
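The genetic-algorithm approach to VM allocation can be sketched generically. The encoding (one gene per VM naming its host PM), the operators, and the toy consolidation-plus-overload fitness below are illustrative assumptions, not the thesis's actual algorithm:

```python
import random

random.seed(7)

def evolve_allocation(n_vms, n_pms, fitness, generations=200,
                      pop_size=30, mutation_rate=0.1):
    # Each individual assigns every VM the index of the PM hosting it.
    population = [[random.randrange(n_pms) for _ in range(n_vms)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)                # lower cost is better
        survivors = population[: pop_size // 2]     # elitist truncation
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_vms)        # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_vms):                  # per-gene mutation
                if random.random() < mutation_rate:
                    child[i] = random.randrange(n_pms)
            children.append(child)
        population = survivors + children
    return min(population, key=fitness)

def example_fitness(assignment, capacity=4):
    # Toy multi-objective cost: number of powered-on PMs (energy proxy)
    # plus a heavy penalty for overloaded PMs (performance proxy).
    load = {}
    for pm in assignment:
        load[pm] = load.get(pm, 0) + 1
    overload = sum(max(0, l - capacity) for l in load.values())
    return len(load) + 10 * overload

best = evolve_allocation(n_vms=6, n_pms=3, fitness=example_fitness)
```

Real multi-criteria variants would replace the scalarized cost with Pareto-ranking selection, but the population/crossover/mutation loop has the same shape.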
Abstract:
Background: Tools for exploring large compound databases in search of analogs of query molecules provide strategically important support in drug discovery, helping to identify available analogs of any given reference or hit compound by ligand-based virtual screening (LBVS). We recently showed that large databases can be formatted for very fast searching with various 2D fingerprints using the city-block distance as the similarity measure, in particular a 2D atom-pair fingerprint (APfp) and the related category-extended atom-pair fingerprint (Xfp), which efficiently encode molecular shape and pharmacophores but do not perceive stereochemistry. Here we investigated related 3D atom-pair fingerprints to enable rapid stereoselective searches in the ZINC database (23.2 million 3D structures).
Results: Molecular fingerprints counting atom pairs at increasing through-space distance intervals were designed using either all atoms (16-bit 3DAPfp) or different atom categories (80-bit 3DXfp). These 3D fingerprints retrieved molecular shape and pharmacophore analogs (defined by OpenEye ROCS scoring functions) of 110,000 compounds from the Cambridge Structural Database with accuracy equal to or better than that of the 2D fingerprints APfp and Xfp, and showed comparable performance in recovering actives from decoys in the DUD database. LBVS by 3DXfp or 3DAPfp similarity was stereoselective and gave very different analogs when starting from different diastereomers of the same chiral drug; the results also differed from those of LBVS with the parent 2D fingerprints Xfp or APfp. 3D and 2D fingerprints likewise gave very different results in LBVS of folded molecules, where through-space distances between atom pairs are much shorter than topological distances.
Conclusions: 3DAPfp and 3DXfp are suitable for stereoselective searches for shape and pharmacophore analogs of query molecules in large databases. Web browsers for searching ZINC by 3DAPfp and 3DXfp similarity are accessible at www.gdb.unibe.ch and should provide useful assistance to drug discovery projects.
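The city-block distance named as the similarity measure is simple to state. In the sketch below the toy count vectors stand in for real APfp/Xfp-style count fingerprints:

```python
def city_block_distance(fp_a, fp_b):
    # Manhattan (L1) distance between two count fingerprints: the sum
    # of absolute differences of the pairwise atom-pair counts.
    return sum(abs(a - b) for a, b in zip(fp_a, fp_b))

def rank_by_similarity(query, database):
    # Nearest neighbours first: a smaller distance means a closer analog.
    return sorted(database, key=lambda fp: city_block_distance(query, fp))

# Hand-made 3-bin "fingerprints", for illustration only.
hits = rank_by_similarity([1, 2, 3], [[0, 0, 0], [1, 2, 4], [1, 2, 3]])
```

Because the distance is a plain sum over fixed-length count vectors, database scans of this kind parallelize and index well, which is what makes searching tens of millions of structures feasible.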
Abstract:
Spatial Data Infrastructures (SDIs) have become a methodological and technological benchmark enabling distributed access to historical-cartographic archives. However, it is essential to offer enhanced virtual tools that imitate the current processes and methodologies carried out by librarians, historians, and academics in the existing map libraries around the world. These virtual processes must be supported by a generic framework for managing, querying, and accessing distributed georeferenced resources and other content types, such as scientific data or information. The authors have designed and developed support tools that provide enriched browsing, measurement and geometrical-analysis capabilities, and dynamic querying methods, based on SDI foundations. The DIGMAP engine and the IBERCARTO collection enable access to georeferenced historical-cartographic archives. Based on lessons learned from the CartoVIRTUAL and DynCoopNet projects, a generic service-architecture scheme is proposed. In this way, it is possible to integrate virtual map rooms and SDI technologies, bringing support to researchers in the historical and social domains.
Abstract:
As a common reference for many in-development standards and execution frameworks, special attention is being paid to Service-Oriented Architectures (SOAs). SOA modeling, however, is an area in which consensus has not yet been achieved. Currently, standardization organizations are drafting proposals to address this problem. Nevertheless, until very recently, non-functional aspects of services were not considered in standardization processes. In particular, there is no design solution that permits independent development of the functional and non-functional concerns of SOAs, allowing each concern to be addressed appropriately in the early stages of development in a way that guarantees the quality of this type of system. This paper, building on previous work, presents an approach to integrating security-related non-functional aspects (such as confidentiality, integrity, and access control) into the development of services.