835 results for distributed databases
Abstract:
INTRODUCTION HIV care and treatment programmes worldwide are transforming as they push to deliver universal access to essential prevention, care and treatment services to persons living with HIV and their communities. The characteristics and capacity of these HIV programmes affect patient outcomes and quality of care. Despite the importance of ensuring optimal outcomes, few studies have addressed the capacity of HIV programmes to deliver comprehensive care. We sought to describe such capacity in HIV programmes in seven regions worldwide. METHODS Staff from 128 sites in 41 countries participating in the International epidemiologic Databases to Evaluate AIDS (IeDEA) completed a site survey from 2009 to 2010, including sites in the Asia-Pacific region (n=20), Latin America and the Caribbean (n=7), North America (n=7), Central Africa (n=12), East Africa (n=51), Southern Africa (n=16) and West Africa (n=15). We computed a measure of the comprehensiveness of care based on seven World Health Organization-recommended essential HIV services. RESULTS Most sites reported serving urban populations (61%; region range (rr): 33-100%) and both adult and paediatric populations (77%; rr: 29-96%). Only 45% of HIV clinics that reported treating children had paediatricians on staff. As for the seven essential services, survey respondents reported that CD4+ cell count testing was available at all but one site, while tuberculosis (TB) screening and community outreach services were available at 80% and 72% of sites, respectively. The remaining four essential services - nutritional support (82%), combination antiretroviral therapy adherence support (88%), prevention of mother-to-child transmission (PMTCT) (94%) and other prevention and clinical management services (97%) - were widely available. Approximately half (46%) of sites reported offering all seven services.
Newer sites and sites in settings with low rankings on the UN Human Development Index (HDI), especially those in President's Emergency Plan for AIDS Relief (PEPFAR) focus countries, tended to offer a more comprehensive array of essential services. Programme characteristics and comprehensiveness varied with the number of years a site had been in operation and with the HDI of its setting, with more recently established clinics in low-HDI settings reporting a more comprehensive array of available services. Survey respondents frequently identified contact tracing of patients, patient outreach, nutritional counselling, onsite viral load testing, universal TB screening and the provision of isoniazid preventive therapy as unavailable services. CONCLUSIONS This study serves as a baseline for ongoing monitoring of the evolution of care delivery over time and lays the groundwork for evaluating HIV treatment outcomes in relation to site capacity for comprehensive care.
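The comprehensiveness measure described above reduces to counting how many of the seven essential services a site offers; a minimal sketch, assuming a hypothetical per-site data layout (the service keys below are abbreviations, not the survey's actual items):

```python
# Hypothetical sketch of the comprehensiveness measure: count how many
# of the seven WHO-recommended essential services a site offers.
ESSENTIAL = ["cd4_testing", "tb_screening", "outreach", "nutrition",
             "art_adherence", "pmtct", "prevention_management"]

def comprehensiveness(site_services):
    """Number of essential services (0-7) available at a site."""
    return sum(1 for s in ESSENTIAL if site_services.get(s, False))

def share_offering_all(sites):
    """Fraction of sites offering all seven services."""
    full = sum(1 for s in sites if comprehensiveness(s) == len(ESSENTIAL))
    return full / len(sites)
```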
Abstract:
Cloud computing has evolved into an enabler for delivering access to large-scale distributed applications running on managed, network-connected computing systems. This makes it possible to host Distributed Enterprise Information Systems (dEISs) in cloud environments while enforcing strict performance and quality-of-service requirements, defined using Service Level Agreements (SLAs). SLAs define the performance boundaries of distributed applications and are enforced by a cloud management system (CMS) that dynamically allocates the available computing resources to the cloud services. We present two novel VM-scaling algorithms focused on dEIS systems, which detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare them against trace-based performance models of dEISs. We compare a total of three SLA-based VM-scaling algorithms (one using prediction mechanisms) on a real-world application scenario involving a large, variable number of users. Our results show that it is beneficial to use autoregressive, predictive SLA-driven scaling algorithms in cloud management systems for guaranteeing performance invariants of distributed cloud applications, as opposed to using only reactive SLA-based VM-scaling algorithms.
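The contrast between reactive and predictive SLA-driven scaling can be illustrated with a toy sketch: a reactive policy adds a VM only after a measured response time already breaches the SLA, while a predictive policy fits a simple autoregressive AR(1) model to recent measurements and scales on the forecast. The function names, thresholds and the AR(1) choice are illustrative assumptions, not the paper's actual algorithms:

```python
# Illustrative sketch: reactive vs. predictive SLA-driven VM scaling.
# All names and thresholds are hypothetical, not from the paper.

def reactive_scale(response_ms, sla_ms, vms):
    """Scale up only after the SLA is already violated."""
    return vms + 1 if response_ms > sla_ms else vms

def ar1_forecast(history):
    """One-step autoregressive AR(1) forecast fitted by least squares."""
    x, y = history[:-1], history[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    denom = sum((xi - mx) ** 2 for xi in x) or 1.0
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / denom
    b = my - a * mx
    return a * history[-1] + b

def predictive_scale(history, sla_ms, vms):
    """Scale up before the forecast response time breaches the SLA."""
    return vms + 1 if ar1_forecast(history) > sla_ms else vms
```

On a steadily rising load, the predictive policy triggers one step earlier than the reactive one, which is the qualitative benefit the paper reports.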
Abstract:
The Interstellar Boundary Explorer (IBEX) observes the IBEX ribbon, which stretches across much of the sky observed in energetic neutral atoms (ENAs). The ribbon covers a narrow (~20°-50°) region that is believed to be roughly perpendicular to the interstellar magnetic field. Superimposed on the IBEX ribbon is the globally distributed flux that is controlled by the processes and properties of the heliosheath. This is a second study that utilizes a previously developed technique to separate ENA emissions in the ribbon from the globally distributed flux. A transparency mask is applied over the ribbon and regions of high emissions. We then solve for the globally distributed flux using an interpolation scheme. Previously, ribbon separation techniques were applied to the first year of IBEX-Hi data at and above 0.71 keV. Here we extend the separation analysis down to 0.2 keV and to five years of IBEX data, enabling the first maps of the ribbon and the globally distributed flux across the full sky of ENA emissions. Our analysis shows the broadening of the ribbon peak at energies below 0.71 keV and demonstrates the apparent deformation of the ribbon in the nose and heliotail. We show global asymmetries of the heliosheath, including both the deflection of the heliotail and the differing widths of the lobes, in the context of the direction, draping, and compression of the heliospheric magnetic field. We discuss implications of the ribbon maps for the wide array of concepts that attempt to explain the ribbon's origin. Thus, we present the five-year separation of the IBEX ribbon from the globally distributed flux in preparation for a formal IBEX data release of ribbon and globally distributed flux maps to the heliophysics community.
Abstract:
Cloud Computing enables provisioning and distribution of highly scalable services in a reliable, on-demand and sustainable manner. However, objectives of managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints lead to challenges for maintaining optimal resource control. Furthermore, conflicting objectives in management of cloud infrastructure and distributed applications might lead to violations of SLAs and inefficient use of hardware and software resources. This dissertation focusses on how SLAs can be used as an input to the cloud management system, increasing the efficiency of allocating resources, as well as that of infrastructure scaling. First, we present an extended SLA semantic model for modelling complex service-dependencies in distributed applications, and for enabling automated cloud infrastructure management operations. Second, we describe a multi-objective VM allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method of discovering relations between the performance indicators of services belonging to distributed applications and then using these relations for building scaling rules that a CMS can use for automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs, based on given SLA performance constraints. All presented research works were implemented and tested using enterprise distributed applications.
Abstract:
In this paper we present BitWorker, a platform for community distributed computing based on BitTorrent. Any splittable task can be easily specified by a user in a meta-information task file, such that it can be downloaded and performed by other volunteers. Peers find each other using Distributed Hash Tables, download existing results, and compute missing ones. Unlike existing distributed computing schemes that rely on centralized coordination points, our scheme is fully distributed and therefore highly robust. We evaluate the performance of BitWorker using mathematical models and real tests, showing processing and robustness gains. BitWorker is available for download and use by the community.
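The core idea of a splittable task described by a meta-information file, with volunteers downloading existing results and computing only the missing pieces, can be sketched as follows; the function names and task-file layout are hypothetical, not the actual BitWorker format:

```python
# Hypothetical sketch of BitTorrent-style work splitting, in the spirit
# of BitWorker: a task file lists independent pieces; each volunteer
# computes the pieces that have no result yet.
import hashlib
import json

def make_task_file(command, n_pieces):
    """Meta-information describing a splittable task and its pieces."""
    pieces = [{"index": i, "args": [i]} for i in range(n_pieces)]
    task = {"command": command, "pieces": pieces}
    # Content hash identifying the task, analogous to a torrent info hash.
    task["info_hash"] = hashlib.sha1(
        json.dumps(pieces, sort_keys=True).encode()).hexdigest()
    return task

def work(task, results, compute):
    """A volunteer fills in every piece that is still missing a result."""
    for piece in task["pieces"]:
        if piece["index"] not in results:        # missing piece
            results[piece["index"]] = compute(*piece["args"])
    return results
```

In the real system, `results` would be exchanged between peers discovered via the DHT rather than held in a local dictionary.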
Abstract:
Background Simple Sequence Repeats (SSRs) are widely used in population genetic studies, but their classical development is costly and time-consuming. The ever-increasing DNA datasets generated by high-throughput techniques offer an inexpensive alternative for SSR discovery. Expressed Sequence Tags (ESTs) have been widely used as an SSR source for plants of economic relevance, but their application to non-model species is still modest. Methods Here, we explored the use of publicly available ESTs (GenBank at the National Center for Biotechnology Information, NCBI) for SSR development in non-model plants, focusing on genera listed by the International Union for Conservation of Nature (IUCN). We also searched two model genera with fully annotated genomes, Arabidopsis and Oryza, for EST-SSRs and used them as controls for genome distribution analyses. Overall, we downloaded 16 031 555 sequences for 258 plant genera, which were mined for SSRs and their primers with the help of QDD1. Genome distribution analyses in Oryza and Arabidopsis were done by aligning the SSR-containing sequences against the Oryza sativa and Arabidopsis thaliana reference genomes with the Basic Local Alignment Search Tool (BLAST) on the NCBI website. Finally, we performed an empirical test to determine the performance of our EST-SSRs in a few individuals from four species of two eudicot genera, Trifolium and Centaurea. Results We explored a total of 14 498 726 EST sequences from the dbEST database (NCBI) in 257 plant genera from the IUCN Red List. We identified a very large number (17 102) of ready-to-test EST-SSRs in most plant genera (193) at no cost. Overall, dinucleotide and trinucleotide repeats were the prevalent types, but the abundance of the various types of repeat differed between taxonomic groups. Control genomes revealed that trinucleotide repeats were mostly located in coding regions while dinucleotide repeats were largely associated with untranslated regions.
Our results from the empirical test revealed considerable amplification success and transferability between congenerics. Conclusions The present work represents the first large-scale study developing SSRs from publicly accessible EST databases in threatened plants. Here we provide a very large number of ready-to-test EST-SSRs (17 102) for 193 genera. The cross-species transferability suggests that the number of possible target species is large. Since trinucleotide repeats are abundant and mainly linked to exons, they might be useful in evolutionary and conservation studies. Altogether, our study strongly supports the use of EST databases as an extremely affordable and fast alternative for SSR development in threatened plants.
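SSR mining of the kind performed by tools such as QDD can be sketched with a regular expression that finds perfect di- and trinucleotide repeats; the minimum repeat counts below are illustrative thresholds, not those used in the study:

```python
# Minimal sketch of SSR (microsatellite) mining in an EST sequence:
# find 2-3 bp motifs repeated at least a minimum number of times.
# Thresholds are illustrative assumptions.
import re

def find_ssrs(seq, min_repeats={2: 6, 3: 5}):
    """Return (motif, repeat_count, start) for perfect di-/tri repeats."""
    hits = []
    for unit, min_n in min_repeats.items():
        # A unit-length motif followed by at least (min_n - 1) copies.
        pattern = re.compile(r"((?:[ACGT]{%d}))\1{%d,}" % (unit, min_n - 1))
        for m in pattern.finditer(seq):
            motif = m.group(1)
            if len(set(motif)) > 1:  # skip mononucleotide runs like TTTTTT
                hits.append((motif, len(m.group(0)) // unit, m.start()))
    return hits
```

A primer-design step (as QDD performs) would then run on the flanking sequence of each hit.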
Abstract:
Advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means for specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimum amount of computing and network resources to use for ensuring that the performance requirements of all her/his applications are met. She/he is also interested in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, the infrastructure providers are interested in optimally provisioning the virtual resources onto the available physical infrastructure so that their operational costs are minimized, while maximizing the performance of tenants' applications.
Motivated by the complexities associated with the management and scaling of distributed applications, while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on dynamic sizing (scaling) of virtual infrastructures composed of virtual machines (VMs) bound to application services. We describe several algorithms for adapting the number of VMs allocated to the distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for dynamic composition of scaling rules for distributed services, using benchmark-generated application monitoring traces. We show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We also provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. We present a resource management system based on a genetic algorithm, which performs allocation of virtual resources while considering the optimization of multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
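A genetic-algorithm approach to multi-objective VM allocation, as referenced above, can be sketched as follows: chromosomes are VM-to-host assignments, and a weighted penalty combines the number of hosts used with capacity overload. All parameters and penalty weights are illustrative assumptions, not those of the thesis:

```python
# Toy sketch of GA-based VM-to-host allocation with two objectives
# (hosts used, capacity overload), combined as a weighted penalty.
import random

def fitness(assignment, vm_demand, host_cap):
    """Lower is better: hosts used plus heavily weighted overload."""
    load = {}
    for vm, host in enumerate(assignment):
        load[host] = load.get(host, 0) + vm_demand[vm]
    overload = sum(max(0, l - host_cap) for l in load.values())
    return len(load) + 10 * overload

def allocate(vm_demand, n_hosts, host_cap, pop=30, gens=200, seed=1):
    """Evolve VM-to-host assignments minimizing the combined penalty."""
    rng = random.Random(seed)
    n = len(vm_demand)
    population = [[rng.randrange(n_hosts) for _ in range(n)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda a: fitness(a, vm_demand, host_cap))
        survivors = population[:pop // 2]        # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)      # two parents
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]            # one-point crossover
            if rng.random() < 0.3:               # point mutation
                child[rng.randrange(n)] = rng.randrange(n_hosts)
            children.append(child)
        population = survivors + children
    return min(population, key=lambda a: fitness(a, vm_demand, host_cap))
```

A Pareto-based GA (as is common for genuinely multi-objective allocation) would keep the objectives separate instead of collapsing them into one weighted sum.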
Abstract:
With the purpose of rational design of optical materials, distributed atomic polarizabilities of amino acid molecules and their hydrogen-bonded aggregates are calculated in order to identify the most efficient functional groups, able to build up larger electric susceptibilities in crystals. Moreover, we carefully analyze how the atomic polarizabilities depend on the one-electron basis set or the many-electron Hamiltonian, including both wave function and density functional theory methods. This is useful for selecting the level of theory that best combines high accuracy and low computational cost, which is particularly important when using the cluster method to estimate susceptibilities of molecular-based materials.
Abstract:
The population of space debris has increased drastically in recent years. Collisions involving massive objects may produce large numbers of fragments, leading to significant growth of the space debris population. An effective remediation measure to stabilize the population in LEO is therefore the removal of large, massive space debris. To remove these objects, not only precise orbits but also more detailed information about their attitude states is required. One important property of an object targeted for removal is its spin period and spin axis orientation. When observing a rotating object, the observer sees different surface areas of the object, which leads to changes in the measured intensity. Rotating objects therefore produce periodic brightness variations with frequencies related to their spin periods. Photometric monitoring is thus a key tool for remote diagnostics of a satellite's rotation around its center of mass. This information is also useful, for example, in case of contingency. Moreover, it is important to take the orientation of a non-spherical body (e.g. space debris) into account in the numerical integration of its motion when a close approach with another spacecraft is predicted. We introduce two databases of light curves: the AIUB database, which contains about a thousand light curves of LEO, MEO and high-altitude debris objects (including a few functional objects) obtained over more than seven years, and the database of the Astronomical Observatory of Odessa University (Ukraine), which contains the results of more than 10 years of photometric monitoring of functioning satellites and large space debris objects in low Earth orbit. AIUB used its 1 m ZIMLAT telescope for all light curves. For tracking low-orbit satellites, the Astronomical Observatory of Odessa used the KT-50 telescope, which has an alt-azimuth mount and allows tracking objects moving at high angular velocity.
The diameter of the KT-50 main mirror is 0.5 m and its focal length is 3 m. The Odessa Atlas of light curves includes almost 5500 light curves for ~500 correlated objects from the period 2005-2014. The processing of light curves and the determination of the rotation period in the inertial frame is challenging. Extracted frequencies and reconstructed phases for some interesting targets, e.g. GLONASS satellites, for which SLR data were also available for confirmation, will be presented. The rotation of the Envisat satellite after its sudden failure will be analyzed. The deceleration of its rotation rate over 3 years is studied, together with an attempt to determine the orientation of the rotation axis.
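Extracting a spin period from a light curve amounts to finding the dominant frequency of the brightness signal; a minimal sketch on evenly sampled, noise-free data (real pipelines for the AIUB and Odessa light curves must additionally handle gaps, noise, and apparent-versus-inertial period corrections):

```python
# Illustrative sketch: recover a spin period from a light curve by
# picking the dominant non-DC frequency of a discrete Fourier transform.
import math

def dominant_period(samples, dt):
    """Return the period (in seconds) of the strongest periodic component."""
    n = len(samples)
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2 + 1):       # skip k=0 (mean brightness)
        re = sum(s * math.cos(2 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return n * dt / best_k               # period = total span / cycles
```

For unevenly sampled passes, a Lomb-Scargle periodogram or epoch folding replaces the plain DFT.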
Abstract:
Aberrations of the acoustic wave front, caused by spatial variations of the speed-of-sound, are a main limiting factor for the diagnostic power of medical ultrasound imaging. If not accounted for, aberrations result in low resolution and an increased side lobe level, overall reducing contrast in deep tissue imaging. Various techniques have been proposed for quantifying aberrations by analysing the arrival time of coherent echoes from so-called guide stars or beacons. In situations where a guide star is missing, aperture-based techniques may give ambiguous results. Moreover, they are conceptually focused on aberrators that can be approximated as a phase screen in front of the probe. We propose a novel technique in which the effect of aberration is detected in the reconstructed image as opposed to the aperture data. The varying local echo phase when changing the transmit beam steering angle directly reflects the varying arrival time of the transmit wave front. This allows sensing the angle-dependent aberration delay in a spatially resolved way, and thus aberration correction for a spatially distributed volume aberrator. In phantoms containing a cylindrical aberrator, we achieved location-independent diffraction-limited resolution as well as accurate display of echo location, based on a spatially resolved reconstruction of the speed-of-sound. First successful volunteer results confirm the clinical potential of the proposed technique.
Abstract:
The convergence between the Eurasian and Arabian plates has created a complicated structural setting in the Eastern Turkish high plateau (ETHP), particularly around the Karlıova Triple Junction (KTJ) where the Eurasian, Arabian, and Anatolian plates intersect. This region of interest includes the junction of the North Anatolian Shear Zone (NASZ) and the East Anatolian Shear Zone (EASZ), which form the northern border of the westwardly extruding Anatolian Scholle and the western boundary of the ETHP, respectively. In this study, we focused on a poorly studied component of the KTJ, the Varto Fault Zone (VFZ), and the adjacent secondary structures, which have complex structural settings. Through integrated analyses of remote sensing and field observations, we identified a widely distributed transpressional zone in which the Varto segment of the VFZ forms the northernmost boundary. The other segments, namely the Leylekdağ and Çayçatı segments, are oblique-reverse faults clearly expressed by uplifted topography along their strikes. The measured 515 and 265 m of cumulative uplift for Mt. Leylek and Mt. Dodan, respectively, yield a minimum uplift rate of 0.35 mm/a for the last 2.2 Ma. The multi-oriented secondary structures mostly correlate with “the distributed strike-slip” and “the distributed transpressional” patterns in analogue experiments. The misfits in strike of some of the secondary faults between our observations and the experimental results were reconciled by a clockwise restoration of about 20° to 25° of all relevant structures, a rotation palaeomagnetically measured to have occurred since ~2.8 Ma. Our detected fault patterns and their true nature are consistent with a transpressional tectonic setting that supports previously suggested stationary triple junction models.
Abstract:
A complete reference genome of the Apis mellifera Filamentous virus (AmFV) was determined using Illumina HiSeq sequencing. The AmFV genome is a double-stranded DNA molecule of approximately 498,500 nucleotides with a GC content of 50.8%. It encompasses 247 non-overlapping open reading frames (ORFs), equally distributed on both strands, which cover 65% of the genome. While most of the ORFs lacked above-threshold sequence alignments to reference protein databases, twenty-eight displayed significant homologies with proteins present in other large double-stranded DNA viruses. Remarkably, 13 ORFs had strong similarity with typical baculovirus domains such as PIFs (per os infectivity factor genes: pif-1, pif-2, pif-3 and p74) and BRO (Baculovirus Repeated Open Reading Frame). The putative AmFV DNA polymerase is of type B, but is only distantly related to those of the baculoviruses. The ORFs encoding proteins involved in nucleotide metabolism had the highest percent identity to viral proteins in GenBank. Other notable features include the presence of several collagen-like, chitin-binding, kinesin and pacifastin domains. Due to the large size of the AmFV genome and its inconsistent affiliation with other large double-stranded DNA virus families infecting invertebrates, AmFV may belong to a new virus family.
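The genome statistics reported above (GC content, fraction of the genome covered by ORFs) are straightforward to compute; a minimal sketch with toy inputs, not AmFV data:

```python
# Minimal sketch of the genome statistics reported for AmFV:
# GC content and the fraction of the genome covered by ORFs.

def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def orf_coverage(genome_len, orfs):
    """Fraction of the genome covered by non-overlapping (start, end) ORFs."""
    return sum(end - start for start, end in orfs) / genome_len
```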
Abstract:
This paper describes the spatial data handling procedures used to create a vector database of the Connecticut shoreline from Coastal Survey Maps. The appendix contains detailed information on how the procedures were implemented using Geographic Transformer Software 5 and ArcGIS 8.3. The project was a joint project of the Connecticut Department of Environmental Protection and the University of Connecticut Center for Geographic Information and Analysis.
Abstract:
By Otto v. Seemen