Abstract:
Understanding the nature of the workloads and system demands created by users of the World Wide Web is crucial to properly designing and provisioning Web services. Previous measurements of Web client workloads have been shown to exhibit a number of characteristic features; however, it is not clear how those features may be changing with time. In this study we compare two measurements of Web client workloads separated in time by three years, both captured from the same computing facility at Boston University. The older dataset, obtained in 1995, is well known in the research literature and has been the basis for a wide variety of studies. The newer dataset was captured in 1998 and is comparable in size to the older dataset. The new dataset has the drawback that the collection of users measured may no longer be representative of general Web users; however, using it has the advantage that many comparisons can be drawn more clearly than would be possible using a new, different source of measurement. Our results fall into two categories. First, we compare the statistical and distributional properties of Web requests across the two datasets. This serves to reinforce and deepen our understanding of the characteristic statistical properties of Web client requests. We find that the kinds of distributions that best describe document sizes have not changed between 1995 and 1998, although the specific values of the distributional parameters are different. Second, we explore the question of how the observed differences in the properties of Web client requests, particularly the popularity and temporal locality properties, affect the potential for Web file caching in the network. We find that for the computing facility represented by our traces between 1995 and 1998, (1) the benefits of using size-based caching policies have diminished; and (2) the potential for caching requested files in the network has declined.
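To make the caching discussion concrete, the sketch below replays a request trace through a byte-capacity LRU cache and reports the fraction of requests served from cache. This is an illustrative toy, not the paper's methodology; the trace, object sizes, and capacity are invented.

```python
from collections import OrderedDict

def lru_hit_rate(trace, capacity):
    """Replay a sequence of (url, size) requests through a byte-capacity
    LRU cache and return the fraction of requests served from cache."""
    cache, used, hits = OrderedDict(), 0, 0
    for url, size in trace:
        if url in cache:
            hits += 1
            cache.move_to_end(url)        # mark as most recently used
            continue
        if size > capacity:
            continue                      # object larger than the whole cache
        while used + size > capacity:     # evict least recently used objects
            _, evicted_size = cache.popitem(last=False)
            used -= evicted_size
        cache[url] = size
        used += size
    return hits / len(trace)

# Toy trace: repeated requests to a small set of documents (sizes in KB).
trace = [("/a.html", 4), ("/b.gif", 10), ("/a.html", 4), ("/c.jpg", 7), ("/a.html", 4)]
print(lru_hit_rate(trace, capacity=16))
```

Comparing such hit rates between the 1995 and 1998 traces, under size-based as well as recency-based eviction, is the kind of experiment the abstract's caching results summarize.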
Abstract:
In a recent paper (Changes in Web Client Access Patterns: Characteristics and Caching Implications by Barford, Bestavros, Bradley, and Crovella) we performed a variety of analyses upon user traces collected in the Boston University Computer Science department in 1995 and 1998. A sanitized version of the 1995 trace has been publicly available for some time; the 1998 trace has now been sanitized, and is available from http://www.cs.bu.edu/techreports/1999-011-usertrace-98.gz or ftp://ftp.cs.bu.edu/techreports/1999-011-usertrace-98.gz. This memo discusses the format of this public version of the log, and includes additional discussion of how the data were collected, how the log was sanitized, what this log is and is not useful for, and areas of potential future research interest.
Abstract:
Consideration of how people respond to the question "What is this?" has suggested new problem frontiers for pattern recognition and information fusion, as well as neural systems that embody the cognitive transformation of declarative information into relational knowledge. In contrast to traditional classification methods, which aim to find the single correct label for each exemplar ("This is a car"), the new approach discovers rules that embody coherent relationships among labels which would otherwise appear contradictory to a learning system ("This is a car, that is a vehicle, over there is a sedan"). This talk will describe how an individual who experiences exemplars in real time, with each exemplar trained on at most one category label, can autonomously discover a hierarchy of cognitive rules, thereby converting local information into global knowledge. Computational examples are based on the observation that sensors working at different times, locations, and spatial scales, and experts with different goals, languages, and situations, may produce apparently inconsistent image labels, which are reconciled by implicit underlying relationships that the network's learning process discovers. The ARTMAP information fusion system can, moreover, integrate multiple separate knowledge hierarchies, by fusing independent domains into a unified structure. In the process, the system discovers cross-domain rules, inferring multilevel relationships among groups of output classes, without any supervised labeling of these relationships. In order to self-organize its expert system, the ARTMAP information fusion network features distributed code representations which exploit the model's intrinsic capacity for one-to-many learning ("This is a car and a vehicle and a sedan") as well as many-to-one learning ("Each of those vehicles is a car"). Fusion system software, testbed datasets, and articles are available from http://cns.bu.edu/techlab.
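The flavor of the rule hierarchy described above can be illustrated independently of ARTMAP itself. The following toy sketch (observations and labels invented) infers "every A is a B" rules from multi-label observations by testing which labels always co-occur:

```python
from itertools import permutations

def discover_rules(observations):
    """Infer candidate hierarchy rules 'A implies B' from multi-label
    observations: A -> B holds if every observation containing label A
    also contains label B."""
    labels = set().union(*observations)
    rules = []
    for a, b in permutations(labels, 2):
        if all(b in obs for obs in observations if a in obs):
            rules.append((a, b))
    return rules

# Each set is one object as labeled, possibly by different sensors/experts.
obs = [{"sedan", "car", "vehicle"}, {"car", "vehicle"}, {"truck", "vehicle"}]
for a, b in discover_rules(obs):
    print(f"every {a} is a {b}")   # e.g. every sedan is a car, every car is a vehicle
```

ARTMAP discovers such relationships incrementally from one-label-at-a-time training rather than by this batch co-occurrence test; the sketch only shows the target structure.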
Abstract:
Do humans and animals learn exemplars or prototypes when they categorize objects and events in the world? How are different degrees of abstraction realized through learning by neurons in inferotemporal and prefrontal cortex? How do top-down expectations influence the course of learning? Thirty related human cognitive experiments (the 5-4 category structure) have been used to test competing views in the prototype-exemplar debate. In these experiments, during the test phase, subjects unlearn in a characteristic way items that they had learned to categorize perfectly in the training phase. Many cognitive models do not describe how an individual learns or forgets such categories through time. Adaptive Resonance Theory (ART) neural models provide such a description, and also clarify both psychological and neurobiological data. Matching of bottom-up signals with learned top-down expectations plays a key role in ART model learning. Here, an ART model is used to learn incrementally in response to 5-4 category structure stimuli. Simulation results agree with experimental data, achieving perfect categorization in training and a good match to the pattern of errors exhibited by human subjects in the testing phase. These results show how the model learns both prototypes and certain exemplars in the training phase. ART prototypes are, however, unlike the ones posited in the traditional prototype-exemplar debate. Rather, they are critical patterns of features to which a subject learns to pay attention based on past predictive success and the order in which exemplars are experienced. Perturbations of old memories by newly arriving test items generate a performance curve that closely matches the performance pattern of human subjects. The model also clarifies exemplar-based accounts of data concerning amnesia.
Abstract:
Memories in Adaptive Resonance Theory (ART) networks are based on matched patterns that focus attention on those portions of bottom-up inputs that match active top-down expectations. While this learning strategy has proved successful for both brain models and applications, computational examples show that attention to early critical features may later distort memory representations during online fast learning. For supervised learning, biased ARTMAP (bARTMAP) solves the problem of over-emphasis on early critical features by directing attention away from previously attended features after the system makes a predictive error. Small-scale, hand-computed analog and binary examples illustrate key model dynamics. Two-dimensional simulation examples demonstrate the evolution of bARTMAP memories as they are learned online. Benchmark simulations show that featural biasing also improves performance on large-scale examples. One example, which predicts movie genres and is based, in part, on the Netflix Prize database, was developed for this project. Both first principles and consistent performance improvements on all simulation studies suggest that featural biasing should be incorporated by default in all ARTMAP systems. Benchmark datasets and bARTMAP code are available from the CNS Technology Lab Website: http://techlab.bu.edu/bART/.
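As a cartoon of the featural biasing idea only, and not the published bARTMAP equations, the fragment below attenuates features that drove an erroneous match before the next matching step; the vectors and the 0.5 step size are invented for illustration.

```python
import numpy as np

def biased_match(x, bias):
    """Attenuate previously over-attended features before matching.
    A cartoon of featural biasing, not the published bARTMAP equations."""
    return np.maximum(x - bias, 0.0)

x = np.array([0.9, 0.8, 0.1])    # input feature vector
bias = np.zeros_like(x)

# Suppose a match driven mainly by feature 0 produced a predictive error:
attended = np.array([1.0, 0.2, 0.0])
bias += 0.5 * attended           # direct attention away from those features

print(biased_match(x, bias))     # feature 0 now contributes less to matching
```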
Abstract:
Computational models of learning typically train on labeled input patterns (supervised learning), unlabeled input patterns (unsupervised learning), or a combination of the two (semisupervised learning). In each case input patterns have a fixed number of features throughout training and testing. Human and machine learning contexts present additional opportunities for expanding incomplete knowledge from formal training, via self-directed learning that incorporates features not previously experienced. This article defines a new self-supervised learning paradigm to address these richer learning contexts, introducing a neural network called self-supervised ARTMAP. Self-supervised learning integrates knowledge from a teacher (labeled patterns with some features), knowledge from the environment (unlabeled patterns with more features), and knowledge from internal model activation (self-labeled patterns). Self-supervised ARTMAP learns about novel features from unlabeled patterns without destroying partial knowledge previously acquired from labeled patterns. A category selection function bases system predictions on known features, and distributed network activation scales unlabeled learning to prediction confidence. Slow distributed learning on unlabeled patterns focuses on novel features and confident predictions, defining classification boundaries that were ambiguous in the labeled patterns. Self-supervised ARTMAP improves test accuracy on illustrative low-dimensional problems and on high-dimensional benchmarks. Model code and benchmark data are available from: http://techlab.bu.edu/SSART/.
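One ingredient described above, basing predictions on known features only, can be sketched with a fuzzy-ART-style choice function restricted to observed features. The prototypes, data, and the choice function itself are illustrative assumptions, not the model's actual equations.

```python
import numpy as np

def predict_from_known(x, known, prototypes):
    """Score each category prototype using only the features observed in x,
    so that never-experienced features do not penalize a match."""
    scores = [np.minimum(x[known], w[known]).sum() / (1e-9 + w[known].sum())
              for w in prototypes]
    return int(np.argmax(scores)), max(scores)

prototypes = [np.array([0.9, 0.1, 0.5]), np.array([0.2, 0.8, 0.5])]
x = np.array([0.85, 0.15, 0.0])           # third feature never observed
known = np.array([True, True, False])
label, confidence = predict_from_known(x, known, prototypes)
print(label, round(confidence, 2))        # confidence could scale unlabeled learning
```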
Abstract:
SyNAPSE program of the Defense Advanced Research Projects Agency (Hewlett-Packard Company, subcontract under DARPA prime contract HR0011-09-3-0001, and HRL Laboratories LLC, subcontract #801881-BS under DARPA prime contract HR0011-09-C-0001); CELEST, an NSF Science of Learning Center (SBE-0354378)
Abstract:
SyNAPSE program of the Defense Advanced Research Projects Agency (HRL Laboratories LLC, subcontract #801881-BS under DARPA prime contract HR0011-09-C-0001); CELEST, a National Science Foundation Science of Learning Center (SBE-0354378)
Abstract:
This article presents a new neural pattern recognition architecture based on multichannel data representation. The architecture employs generalized ART modules as building blocks to construct a supervised learning system that generates recognition codes on channels dynamically selected in context, using serial and parallel match tracking led by inter-ART vigilance signals.
Abstract:
A neural network model, called an FBF network, is proposed for automatic parallel separation of multiple image figures from each other and their backgrounds in noisy grayscale or multi-colored images. The figures can then be processed in parallel by an array of self-organizing Adaptive Resonance Theory (ART) neural networks for automatic target recognition. An FBF network can automatically separate the disconnected but interleaved spirals that Minsky and Papert introduced in their book Perceptrons. The network's design also clarifies why humans cannot rapidly separate interleaved spirals, yet can rapidly detect conjunctions of disparity and color, or of disparity and motion, that distinguish target figures from surrounding distractors. Figure-ground separation is accomplished by iterating operations of a Feature Contour System (FCS) and a Boundary Contour System (BCS), both derived from an analysis of biological vision, in the order FCS-BCS-FCS; hence the term FBF. The FCS operations include the use of nonlinear shunting networks to compensate for variable illumination and nonlinear diffusion networks to control filling-in. A key new feature of an FBF network is the use of filling-in for figure-ground separation. The BCS operations include oriented filters joined to competitive and cooperative interactions designed to detect, regularize, and complete boundaries in up to 50 percent noise, while suppressing the noise. A modified CORT-X filter is described which uses both on-cells and off-cells to generate a boundary segmentation from a noisy image.
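As a rough illustration of how a shunting network can discount variable illumination, the sketch below evaluates the steady state of the standard shunting on-center off-surround equation; the parameters and input patterns are invented, and this is a simplification of the FCS operations, not their full form.

```python
import numpy as np

def shunting_steady_state(E, I, A=1.0, B=1.0, D=1.0):
    """Steady state of a shunting on-center off-surround cell,
    dx/dt = -A*x + (B - x)*E - (x + D)*I,
    which normalizes activity so responses reflect relative contrast
    rather than the overall illumination level."""
    return (B * E - D * I) / (A + E + I)

# The same contrast pattern under dim vs. 10x brighter illumination:
dim, bright = np.array([2.0, 1.0]), np.array([20.0, 10.0])
print(shunting_steady_state(dim, dim[::-1]))        # ~[ 0.25, -0.25]
print(shunting_steady_state(bright, bright[::-1]))  # ~[ 0.32, -0.32]
```

Despite a tenfold change in input intensity, the output pattern stays in a similar bounded range, which is the illumination-compensation property the abstract refers to.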
Abstract:
Background: Most cardiovascular disease (CVD) occurs in the presence of traditional risk factors, including hypertension and dyslipidemia, and these in turn are influenced by behavioural factors such as diet and lifestyle. Previous research has identified a group at low risk of CVD based on a cluster of inter-related factors: body mass index (BMI) <25 kg/m2, moderate exercise, alcohol intake, non-smoking and a favourable dietary pattern. The objective of this study was to determine whether these factors are associated with a reduced prevalence of hypertension and dyslipidemia in an Irish adult population. Methods: The study was a cross-sectional survey of 1018 men and women sampled from 17 general practices. Participants completed health, lifestyle and food frequency questionnaires and provided fasting blood samples for analysis of glucose and insulin. We defined a low risk group based on the following protective factors: BMI <25 kg/m2; waist-hip ratio (WHR) <0.85 for women and <0.90 for men; never smoking status; medium to high levels of physical activity; light alcohol consumption (3.5–7 units of alcohol/week) and a "prudent" diet. Dietary patterns were assessed by cluster analysis. Results: We found strong significant inverse associations between the number of protective factors and systolic blood pressure, diastolic blood pressure and dyslipidemia. The prevalence odds ratios of hypertension in persons with 1, 2, 3 and ≥4 protective factors, relative to those with none, were 1.0, 0.76, 0.68 and 0.34 (trend p < 0.01). The prevalence odds ratios of dyslipidemia in persons with 1, 2, 3 and ≥4 protective factors, relative to those with none, were 0.83, 0.98, 0.49 and 0.24 (trend p = 0.001). Conclusion: Our findings of a strong inverse association between low risk behaviours and two of the traditional risk factors for CVD highlight the importance of 'the causes of the causes' and the potential for behaviour modification in CVD prevention at a population level.
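For readers unfamiliar with the statistic, a prevalence odds ratio compares the odds of a condition in an exposure group with the odds in a reference group. The sketch below computes one; the counts are hypothetical and not taken from the study.

```python
def prevalence_odds_ratio(exposed_cases, exposed_total, ref_cases, ref_total):
    """Odds of the condition among the exposed group divided by the odds
    among the reference group (e.g. >=4 protective factors vs. none)."""
    odds_exposed = exposed_cases / (exposed_total - exposed_cases)
    odds_ref = ref_cases / (ref_total - ref_cases)
    return odds_exposed / odds_ref

# Hypothetical counts (not from the paper): 12 hypertensive of 80 people
# with >=4 protective factors vs. 45 of 100 with no protective factors.
print(round(prevalence_odds_ratio(12, 80, 45, 100), 2))  # ~0.22
```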
Abstract:
Contemporary Irish data on the prevalence of major cardiovascular disease (CVD) risk factors are sparse. The primary aims of this study were (1) to estimate the prevalence of major cardiovascular disease risk factors, including Type 2 Diabetes Mellitus, in the general population of men and women between the ages of 50 and 69 years; and (2) to estimate the proportion of individuals in this age group at high absolute risk of cardiovascular disease events on the basis of pre-existing cardiovascular disease or as defined by the Framingham equation. Participants were drawn from the practice lists of 17 general practices in Cork and Kerry using stratified random sampling. A total of 1018 people attended for screening (490 men, 48%) from 1473 who were invited, a response rate of 69.1%. Cardiovascular disease risk factors and glucose intolerance are common in the population of men and women aged between 50 and 69 years. Almost half the participants were overweight and a further quarter met current international criteria for obesity, one of the highest recorded prevalence rates for obesity in a European population sample. Forty per cent of the population reported minimal levels of physical activity and 19% were current cigarette smokers. Approximately half the sample had blood pressure readings consistent with international criteria for the diagnosis of hypertension, but only 38% of these individuals were known to be hypertensive. Eighty per cent of the population sample had a cholesterol concentration in excess of 5 mmol/l. Almost 4% of the population had Type 2 Diabetes Mellitus, of whom 30% were previously undiagnosed. A total of 137 participants (13.5%) had a history or ECG findings consistent with established cardiovascular disease. Of the remaining 881 individuals in the primary prevention population, a total of 20 high-risk individuals (19 male) had a risk of a coronary heart disease event of ≥30% over ten years according to the Framingham risk equation, giving an overall population prevalence of 2.0% (95% CI 1.3 - 3.0). At a risk level of ≥20% over ten years, an additional 91 individuals (8.9%) were identified. Thus a total of 24.4% of the population were at risk either through pre-existing CVD (13.5%) or an estimated 10-year risk exceeding 20% according to the Framingham risk equation (10.9%), and a substantial proportion of middle-aged men in particular are at high risk of CVD. The findings emphasise the scale of the CVD epidemic in Ireland and the need both for ongoing monitoring of risk factors at the population level and for preventive strategies at the clinical and societal levels.
Abstract:
BACKGROUND: Serologic methods have been used widely to test for celiac disease and have gained importance in diagnostic definition and in new epidemiologic findings. However, there is no standardization, and there are no reference protocols and materials. METHODS: The European working group on Serological Screening for Celiac Disease has defined robust noncommercial test protocols for immunoglobulin (Ig)G and IgA gliadin antibodies and for IgA autoantibodies against endomysium and tissue transglutaminase. Standard curves were linear in the decisive range, and intra-assay variation coefficients were less than 5% to 10%. Calibration was performed with a group reference serum. Joint cutoff limits were used. Seven laboratories took part in the final collaborative study on 252 randomized sera classified by histology (103 pediatric and adult patients with active celiac disease, 89 disease control subjects, and 60 blood donors). RESULTS: IgA autoantibodies against endomysium and tissue transglutaminase yielded superior sensitivity (90% and 93%, respectively) and specificity (99% and 95%, respectively) over IgA and IgG gliadin antibodies. Tissue transglutaminase antibody testing showed superior receiver operating characteristic performance compared with gliadin antibodies. The kappa (κ) values for interlaboratory reproducibility showed superiority for IgA endomysium antibodies (0.93) in comparison with tissue transglutaminase antibodies (0.83) and gliadin antibodies (0.82 for IgG, 0.62 for IgA). CONCLUSIONS: Basic criteria of standardization and quality assessment must be fulfilled by any given test protocol proposed for serologic investigation of celiac disease. The working group has produced robust test protocols and reference materials available for standardization to further improve the reliability of serologic testing for celiac disease.
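The sensitivity, specificity, and kappa figures reported above follow from standard formulas, sketched below with made-up positive/negative calls; none of the numbers here are from the collaborative study.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(a, b):
    """Agreement beyond chance between two labs' binary calls (1 = positive)
    on the same sera, per Cohen's kappa for two raters."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    p_a, p_b = sum(a) / n, sum(b) / n                    # each lab's positive rate
    p_chance = p_a * p_b + (1 - p_a) * (1 - p_b)         # chance agreement
    return (p_obs - p_chance) / (1 - p_chance)

print(sensitivity_specificity(tp=93, fn=10, tn=142, fp=7))
lab1 = [1, 1, 1, 0, 0, 0, 1, 0]   # hypothetical calls on eight sera
lab2 = [1, 1, 0, 0, 0, 0, 1, 1]
print(cohens_kappa(lab1, lab2))   # 0.5
```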
Abstract:
A number of different interferon-gamma ELISpot protocols are in use in laboratories studying antigen-specific immune responses. It is therefore unclear how results from different assays compare, and what factors most significantly influence assay outcome. One such difference is that some laboratories use a short in vitro stimulation period of cells before they are transferred to the ELISpot plate; this is commonly done in the case of frozen cells, in order to enhance assay sensitivity. Other differences that may be significant include antibody coating of plates, the use of media with or without serum, the serum source and the number of cells added to the wells. The aim of this paper was to identify which components of the different ELISpot protocols influenced assay sensitivity and inter-laboratory variation. Four laboratories provided protocols for quantifying numbers of interferon-gamma spot forming cells in human peripheral blood mononuclear cells stimulated with Mycobacterium tuberculosis derived antigens. The differences in the protocols were compared directly. We found that several sources of variation in assay protocols can be eliminated, for example by avoiding serum supplementation and using AIM-V serum-free medium. In addition, the number of cells added to ELISpot wells should also be standardised. Importantly, delays in peripheral blood mononuclear cell processing before stimulation had a marked effect on the number of detectable spot forming cells; processing delay should thus be minimised as well as standardised. Finally, a pre-stimulation culture period improved the sensitivity of the assay; however, this effect may be both antigen and donor dependent. In conclusion, small differences in ELISpot protocols in routine use can affect the results obtained, and care should be taken in selecting the conditions used in a given study. A pre-stimulation step may improve the sensitivity of the assay, particularly when cells have been previously frozen.
Abstract:
BACKGROUND: HER-2/neu status was determined by immunohistochemistry (IHC) and fluorescence in situ hybridisation (FISH) methods in more than 300 paraffin-embedded primary breast cancer samples. MATERIALS AND METHODS: HER-2/neu status was determined by FISH using the PathVysion kit (Vysis) and by IHC using either the monoclonal antibody CB11 or a cocktail of antibodies: the monoclonal TAB250 and the polyclonal pAb1. RESULTS: Of the 324 cases evaluable by IHC, 65 out of 318 (20%) and 24 out of 324 (7%) were scored as positive when using the antibody cocktail and CB11, respectively. HER-2/neu gene amplification occurred in 64 out of 324 cases (20%). Concordance of FISH and IHC was found in 285 out of 318 cases (90%) and 278 out of 324 cases (86%) using the cocktail and CB11, respectively. CONCLUSION: The cost-effectiveness analysis revealed that the use of a sensitive IHC method followed by confirmation of positive results by FISH considerably decreased the FISH costs and may become standard practice for HER-2/neu evaluation.
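The cost argument in the conclusion can be made concrete with simple arithmetic: screen every case with the cheaper IHC test and reserve FISH for IHC-positive cases. The unit costs below are hypothetical; only the 20% IHC-positive rate echoes the cocktail result above.

```python
def testing_cost(n, ihc_pos_rate, cost_ihc, cost_fish, strategy):
    """Compare testing every tumour by FISH against screening by IHC and
    confirming only IHC-positive cases by FISH (hypothetical unit costs)."""
    if strategy == "fish_all":
        return n * cost_fish
    return n * cost_ihc + n * ihc_pos_rate * cost_fish   # "ihc_then_fish"

# 20% IHC-positive rate as in the cocktail results; costs are made up.
print(testing_cost(324, 0.20, cost_ihc=30, cost_fish=140, strategy="fish_all"))       # 45360
print(testing_cost(324, 0.20, cost_ihc=30, cost_fish=140, strategy="ihc_then_fish"))  # 18792.0
```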