941 results for lifetime-based garbage collection
Abstract:
"Presented at the ASTSWMO 1992 National Solid Waste Forum July 20-22, 1992, Portland, OR."
Abstract:
The article discusses the development of WEBDATANET, established in 2011, which aims to create a multidisciplinary network of web-based data collection experts in Europe. Topics include the presence of 190 experts in 30 European countries and abroad, the establishment of web-based teaching and discussion platforms, and the working groups and task forces. Also discussed is the scope of the research carried out by WEBDATANET. In light of the growing importance of web-based data in the social and behavioral sciences, WEBDATANET was established in 2011 as a COST Action (IS 1004) to create a multidisciplinary network of web-based data collection experts: (web) survey methodologists, psychologists, sociologists, linguists, economists, Internet scientists, media and public opinion researchers. The aim was to accumulate and synthesize knowledge regarding methodological issues of web-based data collection (surveys, experiments, tests, non-reactive data, and mobile Internet research), and foster its scientific usage in a broader community.
Abstract:
BACKGROUND Chlamydia trachomatis (CT) and Neisseria gonorrhoeae (NG) are the most frequent causes of bacterial sexually transmitted infections (STIs). Management strategies that reduce losses in the clinical pathway from infection to cure might improve STI control and reduce complications resulting from lack of, or inadequate, treatment. OBJECTIVES To assess the effectiveness and safety of home-based specimen collection as part of the management strategy for Chlamydia trachomatis and Neisseria gonorrhoeae infections compared with clinic-based specimen collection in sexually active people. SEARCH METHODS We searched the Cochrane Sexually Transmitted Infections Group Specialized Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE and LILACS on 27 May 2015, together with the World Health Organization International Clinical Trials Registry (ICTRP) and ClinicalTrials.gov. We also handsearched conference proceedings, contacted trial authors and reviewed the reference lists of retrieved studies. SELECTION CRITERIA Randomized controlled trials (RCTs) of home-based compared with clinic-based specimen collection in the management of C. trachomatis and N. gonorrhoeae infections. DATA COLLECTION AND ANALYSIS Three review authors independently assessed trials for inclusion, extracted data and assessed risk of bias. We contacted study authors for additional information. We resolved any disagreements through consensus. We used standard methodological procedures recommended by Cochrane. The primary outcome was index case management, defined as the number of participants tested, diagnosed and treated, if they tested positive. MAIN RESULTS Ten trials involving 10,479 participants were included. There was inconclusive evidence of an effect on the proportion of participants with index case management (defined as individuals tested, diagnosed and treated for CT or NG, or both) in the group with home-based (45/778, 5.8%) compared with clinic-based (51/788, 6.5%) specimen collection (risk ratio (RR) 0.88, 95% confidence interval (CI) 0.60 to 1.29; 3 trials, I² = 0%, 1566 participants, moderate quality). Harms of home-based specimen collection were not evaluated in any trial. All 10 trials compared the proportions of individuals tested. The results for the proportion of participants completing testing had high heterogeneity (I² = 100%) and were not pooled. We could not combine data from individual studies looking at the number of participants tested because the proportions varied widely across the studies, ranging from 30% to 96% in the home group and 6% to 97% in the clinic group (low-quality evidence). The number of participants with a positive test was lower in the home-based specimen collection group (240/2074, 11.6%) compared with the clinic-based group (179/967, 18.5%) (RR 0.72, 95% CI 0.61 to 0.86; 9 trials, I² = 0%, 3041 participants, moderate quality). AUTHORS' CONCLUSIONS Home-based specimen collection could result in similar levels of index case management for CT or NG infection when compared with clinic-based specimen collection. Increases in the proportion of individuals tested as a result of home-based, compared with clinic-based, specimen collection are offset by a lower proportion of positive results. The harms of home-based specimen collection compared with clinic-based specimen collection have not been evaluated.
Future RCTs to assess the effectiveness of home-based specimen collection should be designed to measure biological outcomes of STI case management, such as the proportion of participants with negative tests for the relevant STI at follow-up.
Abstract:
Cache-coherent non-uniform memory access (ccNUMA) architecture is a standard design pattern for contemporary multicore processors, and future generations of architectures are likely to be NUMA. NUMA architectures create new challenges for managed runtime systems. Memory-intensive applications use the system's distributed memory banks to allocate data, and the automatic memory manager collects garbage left in these memory banks. The garbage collector may need to access remote memory banks, which entails access latency overhead and potential bandwidth saturation for the interconnection between memory banks. This dissertation makes five significant contributions to garbage collection on NUMA systems, with a case study implementation using the Hotspot Java Virtual Machine. It empirically studies data locality for a Stop-The-World garbage collector when tracing connected objects in NUMA heaps. First, it identifies locality richness that exists naturally in connected objects consisting of a root object and its reachable set ('rooted sub-graphs'). Second, this dissertation leverages the locality characteristic of rooted sub-graphs to develop a new NUMA-aware garbage collection mechanism. A garbage collector thread processes a local root and its reachable set, which is likely to have a large number of objects in the same NUMA node. Third, a garbage collector thread steals references from sibling threads that run on the same NUMA node to improve data locality. This research evaluates the new NUMA-aware garbage collector using seven benchmarks from the established real-world DaCapo benchmark suite. In addition, the evaluation involves the widely used SPECjbb benchmark and a Neo4J graph database Java benchmark, as well as an artificial benchmark. The results of the NUMA-aware garbage collector on a multi-hop NUMA architecture show an average 15% performance improvement. Furthermore, this performance gain is shown to be a result of improved NUMA memory access in a ccNUMA system. Fourth, the existing Hotspot JVM adaptive policy for configuring the number of garbage collection threads is shown to be suboptimal for current NUMA machines. The policy uses outdated assumptions and generates a constant thread count. In fact, the Hotspot JVM still uses this policy in the production version. This research shows that the optimal number of garbage collection threads is application-specific, and that configuring the optimal number of garbage collection threads yields better collection throughput than the default policy. Fifth, this dissertation designs and implements a runtime technique that uses heuristics from dynamic collection behavior to calculate an optimal number of garbage collector threads for each collection cycle. The results show an average 21% improvement in garbage collection performance for the DaCapo benchmarks.
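A minimal sketch of the NUMA-aware tracing idea this abstract describes (a collector thread draining a node-local root's reachable set and stealing only from siblings on the same node) is given below. The class and helper names (NumaAwareTracer, Heap.referencesOf, Heap.tryMark) are hypothetical stand-ins for JVM internals; this is not the Hotspot implementation, and work-list synchronization is elided.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    // Hypothetical sketch of NUMA-aware tracing: each collector thread drains
    // roots allocated on its own NUMA node and steals work only from sibling
    // threads pinned to the same node (work-list synchronization elided).
    final class NumaAwareTracer implements Runnable {
        private final Deque<Object> localWorkList = new ArrayDeque<>();
        private final List<NumaAwareTracer> siblings;   // tracers on the same NUMA node

        NumaAwareTracer(Iterable<Object> nodeLocalRoots, List<NumaAwareTracer> siblings) {
            nodeLocalRoots.forEach(localWorkList::push);
            this.siblings = siblings;
        }

        @Override
        public void run() {
            Object ref;
            while ((ref = nextReference()) != null) {
                // A rooted sub-graph tends to live on one node, so most of the
                // children pushed here stay node-local.
                for (Object child : Heap.referencesOf(ref)) {
                    if (Heap.tryMark(child)) {
                        localWorkList.push(child);
                    }
                }
            }
        }

        private Object nextReference() {
            Object ref = localWorkList.poll();
            if (ref != null) return ref;
            // Local list empty: steal from same-node siblings only, avoiding
            // remote-memory traffic across the NUMA interconnect.
            for (NumaAwareTracer sibling : siblings) {
                Object stolen = sibling.localWorkList.pollLast();
                if (stolen != null) return stolen;
            }
            return null;
        }
    }

    // Placeholder standing in for JVM heap internals (hypothetical).
    final class Heap {
        static Iterable<Object> referencesOf(Object o) { return List.of(); }
        static boolean tryMark(Object o) { return false; }
    }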
Abstract:
This paper introduces a novel technique for identifying logically related sections of the heap such as recursive data structures, objects that are part of the same multi-component structure, and related groups of objects stored in the same collection/array. When combined with the lifetime properties of these structures, this information can be used to drive a range of program optimizations including pool allocation, object co-location, static deallocation, and region-based garbage collection. The technique outlined in this paper also improves the efficiency of the static analysis by providing a normal form for the abstract models (speeding the convergence of the static analysis). We focus on two techniques for grouping parts of the heap. The first is a technique for precisely identifying recursive data structures in object-oriented programs based on the types declared in the program. The second is a novel method for grouping objects that make up the same composite structure, which allows us to partition the objects stored in a collection/array into groups based on a similarity relation. We provide a parametric component in the similarity relation in order to support specific analysis applications (such as a numeric analysis, which would need to partition the objects based on numeric properties of the fields). Using the Barnes-Hut benchmark from the JOlden suite, we show how these grouping methods can be used to identify various types of logical structures, allowing the application of many region-based program optimizations.
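As an illustration only, the sketch below shows a runtime analogue of the similarity-based grouping the abstract describes: the objects stored in one collection are partitioned into groups, with the parametric component of the similarity relation supplied by the client as a key function (for example, a numeric property of a field). All names are hypothetical, and the paper's actual technique is a static analysis over abstract heap models rather than a runtime pass.

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Function;

    // Partition the contents of a collection into groups under a client-supplied
    // similarity key; each resulting group could then be pool-allocated or
    // region-collected together (illustrative sketch, not the paper's analysis).
    final class HeapGrouping {
        static <T, K> Map<K, List<T>> partitionBySimilarity(Iterable<T> collection,
                                                            Function<T, K> similarityKey) {
            Map<K, List<T>> groups = new LinkedHashMap<>();
            for (T element : collection) {
                groups.computeIfAbsent(similarityKey.apply(element), k -> new ArrayList<>())
                      .add(element);
            }
            return groups;
        }

        public static void main(String[] args) {
            // Hypothetical Barnes-Hut-style example: bodies grouped by a numeric
            // property of a field, here whether the mass exceeds a threshold.
            record Body(double mass) {}
            List<Body> bodies = List.of(new Body(0.5), new Body(3.0), new Body(2.5));
            System.out.println(partitionBySimilarity(bodies, b -> b.mass() > 1.0 ? "heavy" : "light"));
        }
    }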
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Online animation resources
Abstract:
The capacitated redistricting problem (CRP) has the objective of redefining, under a given criterion, an initial set of districts of an urban area represented by a geographic network. Each node in the network has different types of demands and each district has a limited capacity. Real-world applications consider more than one criterion in the design of the districts, leading to a multicriteria CRP (MCRP). Examples are found in political districting, sales design, street sweeping, garbage collection and mail delivery. This work addresses the MCRP applied to power meter reading, and two criteria are considered: compactness and homogeneity of the districts. The proposed solution framework is based on a greedy randomized adaptive search procedure and multicriteria scalarization techniques to approximate the Pareto frontier. The computational experiments show the effectiveness of the method for a set of randomly generated networks and for a real-world network extracted from the city of São Paulo. © 2013 Elsevier Ltd.
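As a hedged illustration of how a greedy randomized adaptive search procedure (GRASP) can be combined with scalarization, the sketch below scores each candidate assignment by a weighted sum of the two criteria and picks at random from a restricted candidate list. The weighted-sum form and all names are assumptions made for illustration; the abstract does not specify which scalarization technique the paper uses.

    import java.util.List;
    import java.util.Random;

    // Illustrative GRASP construction step for a bi-criteria districting problem:
    // candidates are scored by a weighted-sum scalarization of compactness and
    // homogeneity costs, and one is picked at random from the restricted
    // candidate list (RCL). All names and the weighted-sum form are assumptions.
    final class GraspScalarizationSketch {
        record Candidate(int node, int district, double compactnessCost, double homogeneityCost) {}

        // Varying the weight w in [0, 1] across runs yields different trade-off
        // solutions, which together approximate the Pareto frontier.
        static double scalarizedCost(Candidate c, double w) {
            return w * c.compactnessCost() + (1.0 - w) * c.homogeneityCost();
        }

        static Candidate greedyRandomizedPick(List<Candidate> candidates, double w,
                                              double alpha, Random rng) {
            double best = candidates.stream().mapToDouble(c -> scalarizedCost(c, w)).min().orElseThrow();
            double worst = candidates.stream().mapToDouble(c -> scalarizedCost(c, w)).max().orElseThrow();
            double cutoff = best + alpha * (worst - best);   // RCL quality threshold
            List<Candidate> rcl = candidates.stream()
                    .filter(c -> scalarizedCost(c, w) <= cutoff)
                    .toList();
            return rcl.get(rng.nextInt(rcl.size()));
        }
    }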
Abstract:
Graduate Program in Geography - IGCE
Abstract:
This article reports on the internet-based second multicenter study (MCS II) of the spine study group (AG WS) of the German trauma association (DGU). It represents a continuation of the first study conducted between the years 1994 and 1996 (MCS I). For the purpose of one common, centralised data capture methodology, a newly developed internet-based data collection system (http://www.memdoc.org) of the Institute for Evaluative Research in Orthopaedic Surgery of the University of Bern was used. The aim of this first publication on MCS II was to describe in detail the new method of data collection via the internet and the structure of the database system developed. The goal of the study was the assessment of the current state of treatment for fresh traumatic injuries of the thoracolumbar spine in the German-speaking part of Europe. For that reason, we intended to collect a large number of cases and representative, valid information about the radiographic, clinical and subjective treatment outcomes. Thanks to the new study design of MCS II, not only the common surgical treatment concepts but also the new and constantly broadening spectrum of spine surgery, i.e. vertebro-/kyphoplasty, computer-assisted surgery and navigation, and minimally invasive and endoscopic techniques, could be documented and evaluated. We present a first statistical overview and preliminary analysis of 18 centers from Germany and Austria that participated in MCS II. Real-time data capture at source was made possible by the constant availability of the data collection system via internet access. Following the principle of an application service provider, software, questionnaires and validation routines are located on a central server, which is accessed from the periphery (hospitals) by means of standard Internet browsers. In this way, costly and time-consuming software installation and maintenance of local data repositories are avoided and, more importantly, cumbersome migration of data into one integrated database becomes obsolete. Finally, this set-up also replaces traditional systems wherein paper questionnaires were mailed to the central study office and entered by hand, whereby incomplete or incorrect forms always represent a resource-consuming problem and source of error. With the new study concept and the expanded inclusion criteria of MCS II, 1,251 case histories with admission and surgical data were collected. This remarkable number of interventions documented during 24 months represents an increase of 183% compared to the previously conducted MCS I. The concept and technical feasibility of the MEMdoc data collection system were proven, as the participants of MCS II succeeded in collecting the largest series of data ever published on patients with spinal injuries treated within a 2-year period.
Abstract:
We present the design of a distributed object system for Prolog, based on adding remote execution and distribution capabilities to a previously existing object system. Remote execution brings RPC into a Prolog system, and its semantics is easy to express in terms of well-known Prolog builtins. The final distributed object design features state mobility and user-transparent network behavior. We sketch an implementation which provides distributed garbage collection and some degree of tolerance to network failures. We provide a preliminary study of the overhead of the communication mechanism for some test cases.
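The abstract only states that the sketched implementation provides distributed garbage collection; the sketch below is a purely illustrative assumption (in Java rather than Prolog, and not the paper's mechanism) of one common realization, reference counting of exported objects: the exporting node reclaims an object once no remote proxies to it remain.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    // Illustrative distributed-GC bookkeeping (an assumption, not the paper's
    // mechanism): the exporting node counts remote proxies per object id and
    // lets the local collector reclaim the object when the count reaches zero.
    final class RemoteObjectTable {
        private final Map<Long, Object> exported = new ConcurrentHashMap<>();
        private final Map<Long, AtomicInteger> remoteRefs = new ConcurrentHashMap<>();

        // Record that a proxy for this object now exists on some remote node.
        void export(long id, Object obj) {
            exported.putIfAbsent(id, obj);
            remoteRefs.computeIfAbsent(id, k -> new AtomicInteger()).incrementAndGet();
        }

        // Called when a remote node reports that it dropped its proxy for id.
        void releaseRemoteRef(long id) {
            AtomicInteger count = remoteRefs.get(id);
            if (count != null && count.decrementAndGet() == 0) {
                remoteRefs.remove(id);
                exported.remove(id);   // no remote holders left; local GC may reclaim it
            }
        }
    }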
Abstract:
Modern object-oriented languages like C# and Java enable developers to build complex applications in less time. These languages are based on heap-allocated pass-by-reference objects for user-defined data structures. This simplifies programming by automatically managing memory allocation and deallocation in conjunction with automated garbage collection. This simplification of programming comes at the cost of performance. Using pass-by-reference objects instead of lighter-weight pass-by-value structs can have a memory impact in some cases. These costs can be critical when these applications run in limited-resource environments such as mobile devices and cloud computing systems. We explore the problem by using the simple and uniform memory model to improve performance. In this work we address this problem by providing an automated and sound static conversion analysis which identifies whether a by-reference type can be safely converted to a by-value type where the conversion may result in performance improvements. This work focuses on C# programs. Our approach is based on a combination of syntactic and semantic checks to identify classes that are safe to convert. We evaluate the effectiveness of our work in identifying convertible types and the impact of this transformation. The results show that the transformation of reference types to value types can have a substantial performance impact in practice. In our case studies, we optimize the performance of the Barnes-Hut program, which shows a 93% decrease in total memory allocation and a 15% reduction in execution time.
Abstract:
ABSTRACT OBJECTIVE To describe elements of vulnerability of victims of snakebite. METHODS This qualitative, descriptive, cross-sectional study had, as its theoretical framework, the concept of vulnerability in its individual, social, and programmatic dimensions. We interviewed 21 patients admitted to a hospital specialized in the care of accidents caused by venomous animals. The interviews were analyzed according to a discourse analysis technique. RESULTS Patients were mainly young men living in remote countryside areas, where health services frequently have limited resources. We found social and individual conditions of vulnerability, such as precarious schooling, low professional qualification, housing without access to piped water, treated sewage, or regular garbage collection, and a lack of knowledge about this health problem. Regarding the programmatic dimension, we found limited accessibility to health services, which could affect the prognosis and the frequency of sequelae and deaths. CONCLUSIONS Considering that such vulnerabilities evoke the need to improve the program for the control of accidents by venomous animals and the training of health workers, we highlight the potential use of the concept of vulnerability, which may broaden the understanding of, and the recommendations for, practice and education related to snakebites.
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], the spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed by the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of selecting the pixels that play the role of mixed sources is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations are in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
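For reference, the linear mixing model with the abundance constraints discussed above can be written, in generic notation that is assumed here and not necessarily that of the chapter's Section 6.2 (which additionally models signature variability and topography modulation), as

    \[
      \mathbf{y} \;=\; \mathbf{M}\,\boldsymbol{\alpha} \;+\; \mathbf{n},
      \qquad \alpha_i \ge 0 \ (i = 1, \dots, p),
      \qquad \sum_{i=1}^{p} \alpha_i \;=\; 1,
    \]

where \(\mathbf{y}\) is the observed spectral vector over the L bands, \(\mathbf{M}\) is the L-by-p matrix whose columns are the p endmember signatures, \(\boldsymbol{\alpha}\) collects the abundance fractions, and \(\mathbf{n}\) is additive noise. The constant-sum (full additivity) constraint is exactly the dependence among abundances that limits the applicability of ICA and IFA discussed above.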