883 results for Cluster Computer
Abstract:
The finite element method is used to simulate coupled problems, which describe the related physical and chemical processes of ore body formation and mineralization, in geological and geochemical systems. The main purpose of this paper is to illustrate some simulation results for different types of modelling problems in pore-fluid saturated rock masses. The aims of the simulation results presented in this paper are: (1) getting a better understanding of the processes and mechanisms of ore body formation and mineralization in the upper crust of the Earth; (2) demonstrating the usefulness and applicability of the finite element method in dealing with a wide range of coupled problems in geological and geochemical systems; (3) qualitatively establishing a set of showcase problems, against which any numerical method and computer package can be reasonably validated. (C) 2002 Published by Elsevier Science B.V.
Abstract:
The first 'Australian Cluster Workshop' was held at the Australia Telescope National Facility in Sydney on 2001 February 6. The aim of the workshop was to bring together the many and varied groups working on clusters of galaxies in Australia, to forge new multi-disciplinary links, and to generate enthusiasm and support for new cluster work and further cluster meetings in Australia. In this paper I present a summary of the workshop as well as some additional review material intended to place current Australian research in a broader perspective, looking ahead to the major issues still to be addressed.
Abstract:
The Fornax Cluster Spectroscopic Survey (FCSS) project utilizes the Two-degree Field (2dF) multi-object spectrograph on the Anglo-Australian Telescope (AAT). Its aim is to obtain spectra for a complete sample of all 14 000 objects with 16.5 ≤ bj ≤ 19.7, irrespective of their morphology, in a 12 deg² area centred on the Fornax cluster. A sample of 24 Fornax cluster members has been identified from the first 2dF field (3.1 deg² in area) to be completed. This is the first complete sample of cluster objects of known distance with well-defined selection limits. Nineteen of the galaxies (with -15.8 < M_B < -12.7) appear to be conventional dwarf elliptical (dE) or dwarf S0 (dS0) galaxies. The other five objects (with -13.6 < M_B < -11.3) are those galaxies which were described recently by Drinkwater et al. and labelled 'ultracompact dwarfs' (UCDs). A major result is that the conventional dwarfs all have scale sizes α ≳ 3 arcsec (≈300 pc). This apparent minimum scale size implies an equivalent minimum luminosity for a dwarf of a given surface brightness. This produces a limit on their distribution in the magnitude-surface brightness plane, such that we do not observe dEs with high surface brightnesses but faint absolute magnitudes. Above this observed minimum scale size of 3 arcsec, the dEs and dS0s fill the whole area of the magnitude-surface brightness plane sampled by our selection limits. The observed correlation between magnitude and surface brightness noted by several recent studies of brighter galaxies is not seen in our fainter cluster sample. A comparison of our results with the Fornax Cluster Catalog (FCC) of Ferguson illustrates that attempts to determine cluster membership solely on the basis of observed morphology can produce significant errors. The FCC identified only 17 of the 24 FCSS sample (i.e. 71 per cent) as being 'cluster' members, in particular missing all five of the UCDs. The FCC also suffers from significant contamination: within the FCSS's field and selection limits, 23 per cent of those objects described as cluster members by the FCC are shown by the FCSS to be background objects.
Ultra-compact dwarf galaxies: a new class of compact stellar system discovered in the Fornax Cluster
Abstract:
We have used the 2dF spectrograph on the Anglo-Australian Telescope to obtain a complete spectroscopic sample of all objects in the magnitude range, 16.5 < bj < 19.8, regardless of morphology, in an area centred on the Fornax Cluster of galaxies. Among the unresolved targets are five objects which are members of the Fornax Cluster. They are extremely compact stellar systems with scale lengths less than 40 parsecs. These ultra-compact dwarfs are unlike any known type of stellar system, being more compact and significantly less luminous than other compact dwarf galaxies, yet much brighter than any globular cluster.
Abstract:
We aimed to study patterns of variation and the factors influencing the evolutionary dynamics of a satellite DNA, pBuM, in all seven Drosophila species from the buzzatii cluster (repleta group). We analyzed 117 alpha pBuM-1 (monomer length 190 bp) and 119 composite alpha/beta (370 bp) pBuM-2 repeats and determined the chromosome location and long-range organization on DNA fibers of the major sequence variants. Such combined methodologies in the study of satDNAs have been used in very few organisms. In most species, concerted evolution is linked to a high copy number of pBuM repeats. Species presenting low-abundance, scattered pBuM repeats did not undergo concerted evolution and maintained part of the ancestral inter-repeat variability. The alpha and alpha/beta repeats colocalized in heterochromatic regions and were distributed on multiple chromosomes, with notable differences between species. High-resolution FISH revealed array sizes from a few kilobases to over 0.7 Mb and mutual arrangements of alpha and alpha/beta repeats along the same DNA fibers, but with considerable changes in the amount of each variant across species. From sequence, chromosomal and phylogenetic data, we could infer that homogenization and amplification events involved both new and ancestral pBuM variants. Altogether, the data on the structure and organization of the pBuM satDNA give insights into genome evolution, including mechanisms that contribute to concerted evolution and diversification.
Abstract:
We have measured nucleotide variation in the CLOCK/CYCLE heterodimer inhibition domain (CCID) of the X-linked clock gene period in seven species belonging to the Drosophila buzzatii cluster, namely D. buzzatii, Drosophila koepferae, Drosophila antonietae, Drosophila serido, Drosophila gouveai, Drosophila seriema and Drosophila borborema. We found that purifying selection is the main force driving sequence evolution in period, in agreement with the important role of the CCID in the clock machinery. Our survey revealed that period provides valuable phylogenetic information that allowed us to resolve the phylogenetic relationships among D. gouveai, D. borborema and D. seriema, which formed a polytomic clade in preliminary studies. The analysis of patterns of intraspecific variation revealed two different lineages of period in D. koepferae, probably reflecting introgressive hybridization from D. buzzatii, in concordance with previous molecular data.
Abstract:
Drosophila antonietae and Drosophila gouveai are allopatric, cactophilic, cryptic species endemic to South America, whose aedeagus morphology is considered the main diagnostic character. In this work, single neighbouring populations from the distribution edge of each species, located in an "introgressive corridor", were analyzed for temporal isoenzymatic genetic variability. Isocitrate dehydrogenase (Idh) emerged as a diagnostic locus between D. antonietae and D. gouveai because each population was fixed for different alleles. Moreover, several polymorphic loci showed accentuated divergence in allele frequencies, as evidenced by Nei's I (0.3188) and D (1.1432), and also by Reynolds' genetic distance and identity (1.3207 and 0.7331, respectively). Our results showed that, in spite of very similar external morphology, related evolutionary histories, close distributions, and events of introgression in the studied area, these cryptic species have high allozymatic differentiation, and this is discussed here. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
Some patients are no longer able to communicate effectively or even interact with the outside world in ways that most of us take for granted. In the most severe cases, tetraplegic or post-stroke patients are literally 'locked in' their bodies, unable to exert any motor control after, for example, a spinal cord injury or a brainstem stroke, requiring alternative methods of communication and control. But we suggest that, in the near future, their brains may offer them a way out. Non-invasive electroencephalogram (EEG)-based brain-computer interfaces (BCIs) can be characterized by the technique used to measure brain activity and by the way that different brain signals are translated into commands that control an effector (e.g., controlling a computer cursor for word processing and accessing the internet). This review focuses on the basic concepts of EEG-based BCI, the main advances in communication, motor control restoration and the down-regulation of cortical activity, and the mirror neuron system (MNS) in the context of BCI. The latter appears to be relevant for clinical applications in the coming years, particularly for severely limited patients. Hypothetically, the MNS could provide a robust way to map neural activity to behavior, representing high-level information about the goals and intentions of these patients. Non-invasive EEG-based BCIs allow brain-derived communication in patients with amyotrophic lateral sclerosis and motor control restoration in patients after spinal cord injury and stroke. Epilepsy and attention deficit hyperactivity disorder patients were able to down-regulate their cortical activity. Given the rapid progression of EEG-based BCI research over the last few years and the swift ascent of computer processing speeds and signal analysis techniques, we suggest that emerging ideas (e.g., the MNS in the context of BCI) related to the clinical neuro-rehabilitation of severely limited patients will generate viable clinical applications in the near future.
Abstract:
Introduction: This is a case report of a 39-year-old patient with a 14-year history of clinically refractory cluster headache (CH), also presenting obstructive sleep apnea (OSA) and complaining of tooth-grinding during sleep. Discussion: Treatment of OSA with an intra-oral device allowed an immediate reduction in the frequency and intensity of CH events. Furthermore, CH attacks did not occur during the 12-month follow-up period.
Abstract:
Objectives: Lung hyperinflation may be assessed by computed tomography (CT). As shown for patients with emphysema, however, CT image reconstruction affects the quantification of hyperinflation. We studied the impact of reconstruction parameters on hyperinflation measurements in mechanically ventilated (MV) patients. Design: Observational analysis. Setting: A university hospital-affiliated research unit. Patients: MV patients with injured (n = 5) or normal lungs (n = 6), and spontaneously breathing patients (n = 5). Interventions: None. Measurements and results: Eight image series involving 3, 5, 7, and 10 mm slices and standard and sharp filters were reconstructed from identical CT raw data. Hyperinflated (V_hyper), normally aerated (V_normal), poorly aerated (V_poor), and nonaerated (V_non) volumes were calculated by densitometry as percentages of total lung volume (V_total). V_hyper obtained with the sharp filter systematically exceeded that with the standard filter, showing a median (interquartile range) increment of 138 (62-272) ml, corresponding to approximately 4% of V_total. In contrast, sharp filtering minimally affected the other subvolumes (V_normal, V_poor, V_non, and V_total). Decreasing slice thickness also increased V_hyper significantly. When changing from 10 to 3 mm thickness, V_hyper increased by a median value of 107 (49-252) ml, in parallel with a small and inconsistent increment in V_non of 12 (7-16) ml. Conclusions: Reconstruction parameters significantly affect the quantitative CT assessment of V_hyper in MV patients. Our observations suggest that sharp filters are inappropriate for this purpose. Thin slices combined with standard filters and more appropriate thresholds (e.g., -950 HU in normal lungs) might improve the detection of V_hyper. Different studies on V_hyper can only be compared if identical reconstruction parameters were used.
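The densitometric classification behind these subvolumes can be sketched in a few lines. The Hounsfield-unit cut-offs below (-900/-500/-100 HU, with -950 HU mentioned in the abstract as a stricter hyperinflation threshold for normal lungs) are common conventions in the quantitative-CT literature, not values taken from this study's protocol:

```python
import numpy as np

# Assumed HU thresholds for aeration categories (illustrative, not from the study):
HYPER_MAX = -900   # hyperinflated:      HU <  -900
NORMAL_MAX = -500  # normally aerated:  -900 <= HU < -500
POOR_MAX = -100    # poorly aerated:    -500 <= HU < -100
                   # nonaerated:         HU >= -100

def aeration_fractions(hu):
    """Classify lung voxels by CT number; return each subvolume as a fraction of V_total."""
    hu = np.asarray(hu, dtype=float)
    total = hu.size
    counts = {
        "hyper": np.sum(hu < HYPER_MAX),
        "normal": np.sum((hu >= HYPER_MAX) & (hu < NORMAL_MAX)),
        "poor": np.sum((hu >= NORMAL_MAX) & (hu < POOR_MAX)),
        "non": np.sum(hu >= POOR_MAX),
    }
    return {k: v / total for k, v in counts.items()}

# Tiny synthetic voxel sample for illustration
fractions = aeration_fractions([-950, -920, -700, -600, -300, -50, 20])
```

Because the category boundaries are hard thresholds on voxel density, any reconstruction choice that shifts voxel values near a boundary (sharp filters, thin slices) directly changes the reported subvolumes, which is the effect the study quantifies.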
Abstract:
Little consensus exists in the literature regarding methods for determination of the onset of electromyographic (EMG) activity. The aim of this study was to compare the relative accuracy of a range of computer-based techniques with respect to EMG onset determined visually by an experienced examiner. Twenty-seven methods were compared which varied in terms of EMG processing (low pass filtering at 10, 50 and 500 Hz), threshold value (1, 2 and 3 SD beyond mean of baseline activity) and the number of samples for which the mean must exceed the defined threshold (20, 50 and 100 ms). Three hundred randomly selected trials of a postural task were evaluated using each technique. The visual determination of EMG onset was found to be highly repeatable between days. Linear regression equations were calculated for the values selected by each computer method which indicated that the onset values selected by the majority of the parameter combinations deviated significantly from the visually derived onset values. Several methods accurately selected the time of onset of EMG activity and are recommended for future use. Copyright (C) 1996 Elsevier Science Ireland Ltd.
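The family of onset detectors compared above (baseline mean plus k·SD threshold, sustained over a fixed window) can be sketched as follows. The default parameters are illustrative picks from the ranges the study tested (1-3 SD; 20/50/100 ms), not its recommended combination:

```python
import numpy as np

def emg_onset(signal, fs, baseline_ms=100, k_sd=2, window_ms=50):
    """Return the index of EMG onset, or None if no onset is found.

    Onset = first sample at which the mean of the rectified signal over
    `window_ms` exceeds (baseline mean + k_sd * baseline SD), where the
    baseline statistics come from the first `baseline_ms` of the recording.
    """
    x = np.abs(np.asarray(signal, dtype=float))   # full-wave rectification
    n_base = int(fs * baseline_ms / 1000)
    n_win = int(fs * window_ms / 1000)
    baseline = x[:n_base]
    threshold = baseline.mean() + k_sd * baseline.std()
    # Slide the window forward; onset is the start of the first window
    # whose mean activity exceeds the threshold.
    for i in range(n_base, len(x) - n_win + 1):
        if x[i:i + n_win].mean() > threshold:
            return i
    return None

# Demo: quiet baseline followed by an EMG-like burst starting at sample 300 (fs = 1000 Hz)
rng = np.random.default_rng(0)
sig = np.concatenate([rng.normal(0, 0.05, 300), rng.normal(0, 1.0, 200)])
onset = emg_onset(sig, fs=1000)
```

Varying `k_sd` and `window_ms` reproduces the trade-off the study measured: stricter thresholds and longer windows delay the detected onset, while lax settings risk triggering on baseline noise.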
Abstract:
In the present study, the authors sought to determine whether the efficiency and cost-effectiveness of cognitive-behavioral treatment (CBT) for panic disorder could be improved by adjunctive computer-assisted therapy. Eighteen participants who met Diagnostic and Statistical Manual of Mental Disorders (3rd ed., revised; American Psychiatric Association, 1987) criteria for panic disorder were randomly assigned to a 12-session CBT (CBT12) condition (D. H. Barlow & M. G. Craske, 1989) or to a 4-session computer-assisted CBT (CBT4-CA) condition. Palmtop computers, with a program developed to incorporate basic principles of CBT, were used by CBT4-CA clients whenever they felt anxious or wanted to practice the therapy techniques and were used by all participants as a momentary assessment tool. CBT4-CA clients carried the computer at all times and continued to use it for 8 weeks after termination of therapy. Analyses of clinically significant change showed superiority of CBT12 at posttest on some measures; however, there were no differences at follow-up.
Abstract:
The absence of considerations of technology in policy studies reinforces the popular notion that technology is a neutral tool. Through an analysis of the role played by computers in the policy processes of Australia's Department of Social Security, this paper argues that computers are political players in policy processes. Findings indicate that computers make aspects of the social domain knowable and therefore governable. The use of computers makes previously infeasible policies possible. Computers also operate as bureaucrats and as agents of client surveillance. Increased policy change, reduced discretion and increasingly targeted and complex policies can be attributed to the use of computer technology. If policy processes are to be adequately understood and analysed, then the role of technology in those processes must be considered.
Abstract:
The identification, modeling, and analysis of interactions between nodes of neural systems in the human brain have become the focus of many studies in neuroscience. The complex neural network structure and its correlations with brain functions have played a role in all areas of neuroscience, including the comprehension of cognitive and emotional processing. Indeed, understanding how information is stored, retrieved, processed, and transmitted is one of the ultimate challenges in brain research. In this context, in functional neuroimaging, connectivity analysis is a major tool for the exploration and characterization of the information flow between specialized brain regions. In most functional magnetic resonance imaging (fMRI) studies, connectivity analysis is carried out by first selecting regions of interest (ROIs) and then calculating an average BOLD time series (across the voxels in each cluster). Some studies have shown that the average may not be a good choice and have suggested, as an alternative, the use of principal component analysis (PCA) to extract the principal eigen-time series from the ROI(s). In this paper, we introduce a novel approach called cluster Granger analysis (CGA) to study connectivity between ROIs. The main aim of this method is to employ multiple eigen-time series in each ROI to avoid the temporal information loss inherent in averaging (e.g., to yield a single "representative" time series per ROI) during the identification of Granger causality. This loss, in turn, may lead to a lack of power in detecting connections. The proposed approach is based on multivariate statistical analysis and integrates PCA and partial canonical correlation in a framework of Granger causality for clusters (sets) of time series. We also describe an algorithm for statistical significance testing based on bootstrapping.
By using Monte Carlo simulations, we show that the proposed approach outperforms conventional Granger causality analysis (i.e., using representative time series extracted by signal averaging or first principal components estimation from ROIs). The usefulness of the CGA approach in real fMRI data is illustrated in an experiment using human faces expressing emotions. With this data set, the proposed approach suggested the presence of significantly more connections between the ROIs than were detected using a single representative time series in each ROI. (c) 2010 Elsevier Inc. All rights reserved.
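As a rough illustration of the baseline CGA is compared against, the sketch below reduces each ROI to a single representative time series (its first principal component) and computes a bivariate Granger-style statistic as the relative reduction in residual variance when the other series' past is added to an autoregression. This is a deliberately simplified stand-in for that baseline, not the PCA plus partial-canonical-correlation machinery of CGA itself; all variable names are illustrative:

```python
import numpy as np

def first_pc(roi):
    """First principal-component time series of an ROI (array of shape time x voxels)."""
    roi = roi - roi.mean(axis=0)                       # center each voxel's series
    u, s, vt = np.linalg.svd(roi, full_matrices=False)
    return u[:, 0] * s[0]                              # dominant temporal mode

def granger_improvement(x, y, p=2):
    """Relative drop in residual sum of squares when p lags of y are added
    to an order-p autoregression of x. Larger values suggest y Granger-causes x."""
    n = len(x)
    lags_x = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    lags_xy = np.column_stack([lags_x] + [y[p - k - 1:n - k - 1] for k in range(p)])
    target = x[p:]
    def rss(design):
        A = np.column_stack([np.ones(len(design)), design])  # add intercept
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        r = target - A @ beta
        return r @ r
    return (rss(lags_x) - rss(lags_xy)) / rss(lags_x)

# Demo: ROI 1's voxels are noisy copies of x, which is driven by y with one lag
rng = np.random.default_rng(1)
n = 500
y = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * y[t - 1] + 0.1 * rng.normal()
roi1 = x[:, None] * rng.uniform(0.5, 1.5, size=8) + 0.05 * rng.normal(size=(n, 8))
imp_y_to_x = granger_improvement(first_pc(roi1), y)   # should be large
imp_x_to_y = granger_improvement(y, first_pc(roi1))   # should be near zero
```

Collapsing each ROI to one component discards any causal information carried by the remaining eigen-time series, which is precisely the loss of power that motivates CGA's use of multiple components per cluster.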