Abstract:
This paper describes the development and basic evaluation of decadal predictions produced using the HiGEM coupled climate model. HiGEM is a higher-resolution version of the HadGEM1 Met Office Unified Model. The horizontal resolution in HiGEM has been increased to 1.25° × 0.83° in longitude and latitude for the atmosphere, and 1/3° × 1/3° globally for the ocean. The HiGEM decadal predictions are initialised using an anomaly assimilation scheme that relaxes anomalies of ocean temperature and salinity to observed anomalies. Ten-year hindcasts are produced for 10 start dates (1960, 1965, ..., 2000, 2005). To determine the relative contributions to prediction skill from initial conditions and external forcing, the HiGEM decadal predictions are compared to uninitialised HiGEM transient experiments. The HiGEM decadal predictions have substantial skill for predictions of annual mean surface air temperature and 100 m upper-ocean temperature. For lead times up to 10 years, anomaly correlations (ACC) over large areas of the North Atlantic Ocean, the Western Pacific Ocean and the Indian Ocean exceed 0.6. Initialisation of the HiGEM decadal predictions significantly increases skill over regions of the Atlantic Ocean, the Maritime Continent and regions of the subtropical North and South Pacific Ocean. In particular, HiGEM produces skillful predictions of the North Atlantic subpolar gyre for up to 4 years lead time (with ACC > 0.7), with skill significantly larger than that of the uninitialised HiGEM transient experiments.
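For reference, the skill metric used above, the anomaly correlation coefficient (ACC), can be computed as in the following minimal NumPy sketch; the array names, climatology baseline and synthetic data are illustrative assumptions, not the study's code (real evaluations also typically area-weight grid cells by cos(latitude)).

```python
import numpy as np

def anomaly_correlation(hindcast, observed, climatology):
    """Anomaly correlation coefficient (ACC) between hindcast and
    observed fields, expressed as anomalies from a common climatology.
    All inputs are arrays of matching shape (e.g. annual-mean surface
    air temperature on a lat-lon grid)."""
    h_anom = hindcast - climatology   # hindcast anomalies
    o_anom = observed - climatology   # observed anomalies
    num = np.sum(h_anom * o_anom)
    den = np.sqrt(np.sum(h_anom**2) * np.sum(o_anom**2))
    return num / den

# Illustrative use with synthetic fields for one lead time
rng = np.random.default_rng(0)
clim = rng.normal(288.0, 5.0, size=(90, 144))        # hypothetical climatology (K)
obs = clim + rng.normal(0.0, 0.5, size=clim.shape)   # "observed" year
fcst = obs + rng.normal(0.0, 0.3, size=clim.shape)   # correlated "hindcast"
print(f"ACC = {anomaly_correlation(fcst, obs, clim):.2f}")
```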
Abstract:
Objective: To describe the composition of metabolic acidosis in patients with severe sepsis and septic shock at intensive care unit admission and throughout the first 5 days of intensive care unit stay. Design: Prospective, observational study. Setting: Twelve-bed intensive care unit. Patients: Sixty patients with either severe sepsis or septic shock. Interventions: None. Measurements and Main Results: Data were collected until 5 days after intensive care unit admission. We studied the contribution of inorganic ion difference, lactate, albumin, phosphate, and strong ion gap to metabolic acidosis. At admission, standard base excess was -6.69 +/- 4.19 mEq/L in survivors vs. -11.63 +/- 4.87 mEq/L in nonsurvivors (p < .05); inorganic ion difference (mainly resulting from hyperchloremia) was responsible for a decrease in standard base excess by 5.64 +/- 4.96 mEq/L in survivors vs. 8.94 +/- 7.06 mEq/L in nonsurvivors (p < .05); strong ion gap was responsible for a decrease in standard base excess by 4.07 +/- 3.57 mEq/L in survivors vs. 4.92 +/- 5.55 mEq/L in nonsurvivors (nonsignificant); and lactate was responsible for a decrease in standard base excess by 1.34 +/- 2.07 mEq/L in survivors vs. 1.61 +/- 2.25 mEq/L in nonsurvivors (nonsignificant). Albumin had an important alkalinizing effect in both groups; phosphate had a minimal acid-base effect. Acidosis in survivors was corrected during the study period as a result of a decrease in lactate and strong ion gap levels, whereas nonsurvivors did not correct their metabolic acidosis. In addition to Acute Physiology and Chronic Health Evaluation II score and serum creatinine level, the magnitude of inorganic ion difference acidosis at intensive care unit admission was independently associated with a worse outcome. Conclusions: Patients with severe sepsis and septic shock exhibit a complex metabolic acidosis at intensive care unit admission, caused predominantly by hyperchloremic acidosis, which was more pronounced in nonsurvivors. Acidosis resolution in survivors was attributable to a decrease in strong ion gap and lactate levels. (Crit Care Med 2009; 37:2733-2739)
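As an illustration of this style of decomposition, the sketch below partitions standard base excess using the simplified Fencl-Stewart equations (after Story et al.); the paper's exact quantitative method may differ, and the patient values are invented.

```python
def base_excess_partition(sbe, na, cl, albumin_g_l, lactate):
    """Approximate contributions to standard base excess (mEq/L)
    using a simplified Fencl-Stewart partitioning (after Story et al.);
    the paper's exact quantitative method may differ."""
    na_cl_effect = na - cl - 38                    # inorganic ion difference
    albumin_effect = 0.25 * (42.0 - albumin_g_l)   # hypoalbuminemia alkalinizes
    lactate_effect = -lactate                      # ~1 mEq/L fall per mmol/L lactate
    unmeasured = sbe - (na_cl_effect + albumin_effect + lactate_effect)
    return {
        "sodium-chloride effect": na_cl_effect,
        "albumin effect": albumin_effect,
        "lactate effect": lactate_effect,
        "unmeasured ions (strong ion gap)": unmeasured,
    }

# Hypothetical admission values for a septic patient (illustrative only)
for name, value in base_excess_partition(
        sbe=-8.0, na=138, cl=110, albumin_g_l=22, lactate=3.0).items():
    print(f"{name}: {value:+.1f} mEq/L")
```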
Abstract:
NGC 6908, an S0 galaxy situated in the direction of NGC 6907, was only recently recognized as a distinct galaxy rather than merely a part of NGC 6907. We present 21-cm radio synthesis observations of this pair of interacting galaxies obtained with the Giant Metrewave Radio Telescope (GMRT), together with optical images and spectroscopy obtained with the Gemini-North telescope. From the radio observations, we obtained the velocity field and the H I column density map of the whole region containing the NGC 6907/8 pair, and by means of Gemini multi-object spectroscopy we obtained high-quality photometric images and 5 angstrom resolution spectra sampling the two galaxies. By comparing the rotation curve of NGC 6907 obtained from the two opposite sides around the main kinematic axis, we were able to distinguish the normal rotational velocity field from the velocity components produced by the interaction between the two galaxies. Taking into account the rotational velocity of NGC 6907 and the velocity derived from the absorption lines for NGC 6908, we verified that the relative velocity between these systems is lower than 60 km s^-1. The emission lines observed in the direction of NGC 6908, not typical of S0 galaxies, have the same velocity expected from the NGC 6907 rotation curve. Some emission lines are superimposed on a broader absorption profile, which suggests that they were not formed in NGC 6908. Finally, the H I profile exhibits details of the interaction, showing three components: one for NGC 6908, another for the excited gas in the NGC 6907 disc, and a last one for the gas with higher relative velocities left behind NGC 6908 by dynamical friction, which was used to estimate that the interaction started (3.4 +/- 0.6) x 10^7 yr ago.
Abstract:
In this work, considering the impact of a supernova remnant (SNR) on a neutral magnetized cloud, we derived analytically a set of conditions that are favourable for driving gravitational instability in the cloud and thus star formation. Using these conditions, we have built diagrams of the SNR radius, R(SNR), versus the initial cloud density, n(c), that constrain a domain in the parameter space where star formation is allowed. This work extends a previous study performed without magnetic fields (Melioli et al. 2006, hereafter Paper I). The diagrams are also tested with fully three-dimensional MHD radiative cooling simulations involving an SNR and a self-gravitating cloud, and we find that the numerical analysis is consistent with the results predicted by the diagrams. While the inclusion of a homogeneous magnetic field approximately perpendicular to the impact velocity of the SNR, with an intensity of ~1 μG within the cloud, results in only a small shrinking of the star formation zone in the diagram relative to the non-magnetic case, a larger magnetic field (~10 μG) causes a significant shrinking, as expected. Though derived from simple analytical considerations, these diagrams provide a useful tool for identifying sites where star formation could be triggered by the impact of a supernova blast wave. Applying them to a few regions of our own Galaxy (e.g. the large CO shell in the direction of Cassiopeia, and the Edge Cloud 2 in the direction of the Scorpius constellation) has revealed that star formation in those sites could have been triggered by shock waves from SNRs for specific values of the initial neutral cloud density and the SNR radius. Finally, we have evaluated the effective star formation efficiency for this sort of interaction and found that it is generally smaller than the values observed in our own Galaxy (SFE ~ 0.01-0.3). This result is consistent with previous work in the literature and also suggests that the mechanism investigated here, though very powerful for driving structure formation, supersonic turbulence and, eventually, local star formation, does not seem to be sufficient to drive global star formation in normal star-forming galaxies, not even when the magnetic field in the neutral clouds is neglected.
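A back-of-the-envelope version of this kind of instability condition can be sketched as follows: the cloud can collapse only if its mass exceeds both the thermal Jeans mass at the cloud density and the magnetically critical mass. The formulas below are the standard textbook criteria (Jeans mass; Mouschovias & Spitzer critical mass-to-flux), not the specific conditions derived in the paper, and all input values are illustrative.

```python
import numpy as np

# Physical constants (cgs)
G = 6.674e-8          # gravitational constant
k_B = 1.381e-16       # Boltzmann constant
m_H = 1.673e-24       # hydrogen mass
M_sun = 1.989e33

def jeans_mass(n_c, T, mu=2.3):
    """Thermal Jeans mass (g) for number density n_c (cm^-3) and T (K)."""
    rho = mu * m_H * n_c
    cs2 = k_B * T / (mu * m_H)                  # isothermal sound speed squared
    lam_J = np.sqrt(np.pi * cs2 / (G * rho))    # Jeans length
    return rho * (4.0 / 3.0) * np.pi * (lam_J / 2.0) ** 3

def magnetic_critical_mass(B_uG, R_pc):
    """Magnetically critical mass (g), M_cr ~ 0.12 Phi / sqrt(G)
    (Mouschovias & Spitzer), for a field B (microgauss) threading a
    cloud of radius R (parsec)."""
    phi = np.pi * (R_pc * 3.086e18) ** 2 * (B_uG * 1e-6)
    return 0.12 * phi / np.sqrt(G)

# Hypothetical cloud: n_c = 100 cm^-3, T = 100 K, R = 10 pc
M_J = jeans_mass(100.0, 100.0)
for B in (1.0, 10.0):
    M_cr = magnetic_critical_mass(B, 10.0)
    print(f"B = {B:4.1f} uG: M_J = {M_J/M_sun:.0f} Msun, M_cr = {M_cr/M_sun:.0f} Msun")
```

With these illustrative numbers the magnetic criterion is comparable to the thermal one at ~1 μG but dominates at ~10 μG, qualitatively matching the shrinking of the star formation zone described above.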
Abstract:
Broad-scale phylogenetic analyses of the angiosperms and of the Asteridae have failed to confidently resolve relationships among the major lineages of the campanulid Asteridae (i.e., the euasterid II of APG II, 2003). To address this problem we assembled presently available sequences for a core set of 50 taxa, representing the diversity of the four largest lineages (Apiales, Aquifoliales, Asterales, Dipsacales) as well as the smaller "unplaced" groups (e.g., Bruniaceae, Paracryphiaceae, Columelliaceae). We constructed four data matrices for phylogenetic analysis: a chloroplast coding matrix (atpB, matK, ndhF, rbcL), a chloroplast non-coding matrix (rps16 intron, trnT-F region, trnV-atpE IGS), a combined chloroplast dataset (all seven chloroplast regions), and a combined genome matrix (seven chloroplast regions plus 18S and 26S rDNA). Bayesian analyses of these datasets using mixed substitution models produced often well-resolved and supported trees. Consistent with more weakly supported results from previous studies, our analyses support the monophyly of the four major clades and the relationships among them. Most importantly, Asterales are inferred to be sister to a clade containing Apiales and Dipsacales. Paracryphiaceae is consistently placed sister to the Dipsacales. However, the exact relationships of Bruniaceae, Columelliaceae, and an Escallonia clade depended upon the dataset. Areas of poor resolution in combined analyses may be partly explained by conflict between the coding and non-coding data partitions. We discuss the implications of these results for our understanding of campanulid phylogeny and evolution, paying special attention to how our findings bear on character evolution and biogeography in Dipsacales.
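For readers unfamiliar with combined matrices: concatenating per-gene alignments into a single supermatrix, padding taxa that lack a gene with '?', can be sketched as below. The gene names and sequences here are toy placeholders, not the study's data.

```python
def build_supermatrix(gene_alignments):
    """Concatenate per-gene alignments (dicts: taxon -> aligned sequence)
    into one combined matrix; taxa absent from a gene get '?' padding."""
    taxa = sorted({t for aln in gene_alignments.values() for t in aln})
    matrix = {t: "" for t in taxa}
    for gene, aln in gene_alignments.items():
        length = len(next(iter(aln.values())))   # alignment length for this gene
        for t in taxa:
            matrix[t] += aln.get(t, "?" * length)
    return matrix

# Toy example with placeholder data for two chloroplast genes
genes = {
    "rbcL": {"Apiales": "ATGTCA", "Asterales": "ATGGCA", "Dipsacales": "ATGTCT"},
    "matK": {"Apiales": "TTGA", "Dipsacales": "TTGC"},  # Asterales missing matK
}
for taxon, seq in build_supermatrix(genes).items():
    print(taxon, seq)
```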
Abstract:
The most significant radiation field nonuniformity is the well-known Heel effect. This nonuniform beam effect has a negative influence on the results of computer-aided diagnosis of mammograms, which is frequently used for early cancer detection. This paper presents a method to correct all pixels in the mammography image according to the excess or lack of radiation to which they have been exposed as a result of this effect. The current simulation method calculates the intensities at all points of the image plane. In the simulated image, the percentage of radiation received at each point is expressed with the center of the field as reference. In the digitized mammogram, the percentages of the optical density of all the pixels of the analyzed image are also calculated. The Heel effect causes a Gaussian distribution around the anode-cathode axis and a logarithmic distribution parallel to this axis. These characteristic distributions are used to determine the center of the radiation field as well as the cathode-anode axis, allowing for the automatic determination of the correlation between these two sets of data. The measurements obtained with our proposed method differ on average from those of commercial equipment by 2.49 mm in the direction perpendicular to the anode-cathode axis and by 2.02 mm parallel to it. The method eliminates around 94% of the Heel effect in the radiological image, so that objects reflect their actual x-ray absorption. To evaluate the method, experimental data were taken from known objects; the evaluation could also be performed with clinical and digital images.
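A minimal sketch of this style of correction is given below, assuming the Gaussian/logarithmic field model described in the abstract; the model parameters, orientation and normalization are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def heel_field(shape, sigma=400.0, k=0.15):
    """Model of relative beam intensity, normalized to 1.0 at the field
    center: Gaussian across the anode-cathode axis (columns) and a
    logarithmic fall-off along it (rows). Parameters are illustrative."""
    rows, cols = shape
    y = np.arange(rows) - rows / 2.0     # along the anode-cathode axis
    x = np.arange(cols) - cols / 2.0     # perpendicular to it
    along = 1.0 - k * np.log1p(np.abs(y) / rows)   # logarithmic component
    across = np.exp(-(x**2) / (2.0 * sigma**2))    # Gaussian component
    field = np.outer(along, across)
    return field / field[rows // 2, cols // 2]

def correct_heel(image):
    """Rescale every pixel by the modeled excess/deficit of radiation,
    so the remaining contrast reflects x-ray absorption by the object."""
    return image / heel_field(image.shape)

# Illustrative use on a synthetic flat exposure under the modeled beam
flat = heel_field((600, 480)) * 1000.0
corrected = correct_heel(flat)
print(corrected.std() / corrected.mean())   # ~0: nonuniformity removed
```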
Abstract:
A new method for determining the temporal evolution of plasma rotation is reported in this work. The method is based upon the detection of two different portions of the spectral profile of a plasma impurity line, using a monochromator with two photomultipliers installed at the exit slits. The plasma rotation velocity is determined from the ratio of the two detected signals. The measured toroidal rotation velocities of C III (4647.4 angstrom) and C VI (5290.6 angstrom), at different radial positions in TCABR discharges, show good agreement, within experimental uncertainty, with previous results (Severo et al 2003 Nucl. Fusion 43 1047). In particular, they confirm that the plasma core rotates in the direction opposite to the plasma current, while near the plasma edge (r/a > 0.9) the rotation is in the same direction. This technique was also used to investigate the dependence of toroidal rotation on the poloidal position of gas puffing. The results show that there is no dependence for the plasma core, while for the plasma edge (r/a > 0.9) some dependence is observed.
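The principle behind the two-detector ratio can be made concrete: for a Gaussian line profile sampled at two wavelengths placed symmetrically about the rest wavelength, the log of the signal ratio is linear in the Doppler shift. The sketch below demonstrates this inversion; the line width, channel offset and velocity are illustrative numbers, not TCABR calibration values.

```python
import numpy as np

C = 2.998e5  # speed of light, km/s

def line_intensity(lam, lam0, v, sigma):
    """Gaussian line profile Doppler-shifted by velocity v (km/s)."""
    shift = lam0 * v / C
    return np.exp(-((lam - lam0 - shift) ** 2) / (2.0 * sigma**2))

def velocity_from_ratio(i_blue, i_red, lam0, d, sigma):
    """Invert the two-channel signal ratio for the rotation velocity.
    For a Gaussian line sampled at lam0 -/+ d:
        ln(I_blue / I_red) = -2 d dlam / sigma^2,
    so dlam = -sigma^2 ln(I_blue/I_red) / (2 d) and v = c dlam / lam0."""
    dlam = -sigma**2 * np.log(i_blue / i_red) / (2.0 * d)
    return C * dlam / lam0

# Illustrative numbers for the C III line (wavelengths in angstroms)
lam0, sigma, d = 4647.4, 0.5, 0.4
v_true = 15.0  # km/s toroidal rotation
i_b = line_intensity(lam0 - d, lam0, v_true, sigma)
i_r = line_intensity(lam0 + d, lam0, v_true, sigma)
print(f"recovered v = {velocity_from_ratio(i_b, i_r, lam0, d, sigma):.2f} km/s")
```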
Abstract:
This paper presents the use of a multiprocessor architecture for improving the performance of tomographic image reconstruction. Image reconstruction in computed tomography (CT) is an intensive task for single-processor systems. We investigate the suitability of filtered image reconstruction on DSPs organized for parallel processing and compare it with an implementation based on the Message Passing Interface (MPI) library. The experimental results show that the speedups observed for both platforms increased with image resolution. In addition, the ratio of execution time to communication time (Rt/Rc) as a function of sample size showed a narrow variation for the DSP platform in comparison with the MPI platform, indicating better DSP performance for parallel image reconstruction.
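For context, a single-process reference version of filtered backprojection is sketched below in NumPy (standing in for the DSP and MPI implementations of the paper); parallelizing it amounts to splitting the projection angles across processors and summing the partial images. The sinogram here is a random stand-in, not real CT data.

```python
import numpy as np

def ramp_filter(sinogram):
    """Apply the ramp (Ram-Lak) filter to each projection row via FFT."""
    n = sinogram.shape[1]
    filt = np.abs(np.fft.fftfreq(n))
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * filt, axis=1))

def backproject(filtered, angles, size):
    """Smear each filtered projection back across the image grid."""
    recon = np.zeros((size, size))
    half = size // 2
    ys, xs = np.mgrid[-half:size - half, -half:size - half]
    n_det = filtered.shape[1]
    for proj, theta in zip(filtered, angles):
        # detector coordinate of every pixel for this viewing angle
        t = (xs * np.cos(theta) + ys * np.sin(theta) + n_det // 2).astype(int)
        valid = (t >= 0) & (t < n_det)
        recon[valid] += proj[t[valid]]
    return recon * np.pi / (2 * len(angles))

# In a parallel version, each worker (DSP core or MPI rank) would
# backproject a disjoint subset of `angles`; the partials are summed.
angles = np.linspace(0, np.pi, 180, endpoint=False)
sino = np.random.rand(len(angles), 256)   # stand-in sinogram
image = backproject(ramp_filter(sino), angles, 256)
```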
Abstract:
The ever-increasing spurt in digital crimes such as image manipulation, image tampering, signature forgery, image forgery and illegal transactions has intensified the demand to combat these forms of criminal activity. In this direction, biometrics, the computer-based validation of a person's identity, is becoming more and more essential, particularly for high-security systems. The essence of biometrics is the measurement of a person's physiological or behavioral characteristics, which enables authentication of that person's identity. Biometric-based authentication is also becoming increasingly important in computer-based applications because the amount of sensitive data stored in such systems is growing. The new demands on biometric systems are robustness, high recognition rates, the capability to handle imprecision and uncertainties of a non-statistical kind, and great flexibility. It is precisely here that soft computing techniques come into play. The main aim of this write-up is to present a pragmatic view of applications of soft computing techniques in biometrics and to analyze their impact. It is found that soft computing has already made inroads, in terms of individual methods or in combination. Applications of varieties of neural networks top the list, followed by fuzzy logic and evolutionary algorithms. In a nutshell, soft computing paradigms are used for biometric tasks such as feature extraction, dimensionality reduction, pattern identification, pattern mapping and the like.
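As a toy illustration of the pipeline pattern named above (dimensionality reduction followed by neural-network pattern identification), the sketch below uses scikit-learn on synthetic feature vectors; it stands in for the far more varied systems the survey covers, and all data are invented.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for extracted biometric feature vectors
# (e.g. fingerprint minutiae or face descriptors), 20 subjects.
rng = np.random.default_rng(0)
n_subjects, samples_each, dim = 20, 30, 64
centers = rng.normal(0, 1, (n_subjects, dim))
X = np.vstack([c + rng.normal(0, 0.3, (samples_each, dim)) for c in centers])
y = np.repeat(np.arange(n_subjects), samples_each)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(
    PCA(n_components=16),                   # dimensionality reduction
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
model.fit(X_tr, y_tr)
print(f"identification accuracy: {model.score(X_te, y_te):.2f}")
```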
Abstract:
Until recently, First-Order Temporal Logic (FOTL) has been only partially understood. While it is well known that the full logic has no finite axiomatisation, a more detailed analysis of fragments of the logic was not previously available. However, a breakthrough by Hodkinson et al., identifying a finitely axiomatisable fragment, termed the monodic fragment, has led to improved understanding of FOTL. Yet, in order to utilise these theoretical advances, it is important to have appropriate proof techniques for this monodic fragment. In this paper, we modify and extend the clausal temporal resolution technique, originally developed for propositional temporal logics, to enable its use in such monodic fragments. We develop a specific normal form for monodic formulae in FOTL, and provide a complete resolution calculus for formulae in this form. Not only is this clausal resolution technique useful as a practical proof technique for certain monodic classes, but the approach also provides us with increased understanding of the monodic fragment. In particular, we show how several features of monodic FOTL can be established as corollaries of the completeness result for the clausal temporal resolution method. These include definitions of new decidable monodic classes, simplification of existing monodic classes by reductions, and completeness of clausal temporal resolution in the case of monodic logics with expanding domains, a case with much significance in both theory and practice.
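To make the flavour of clausal temporal resolution concrete, a standard step-resolution inference on clauses in separated normal form looks like the following; this is a generic propositional instance of the rule, not an example taken from the paper.

```latex
% Two SNF step clauses that clash on the literal L in the
% next-time state resolve into a single step clause:
\[
\frac{P \Rightarrow \bigcirc (Q \lor L)
      \qquad
      R \Rightarrow \bigcirc (S \lor \lnot L)}
     {(P \land R) \Rightarrow \bigcirc (Q \lor S)}
\]
```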
Abstract:
First-order temporal logic is a concise and powerful notation, with many potential applications in both Computer Science and Artificial Intelligence. While the full logic is highly complex, recent work on monodic first-order temporal logics has identified important enumerable and even decidable fragments. In this paper we present the first resolution-based calculus for monodic first-order temporal logic. Although the main focus of the paper is on establishing the completeness result, we also consider implementation issues and define a basic loop-search algorithm that may be used to guide the temporal resolution system.
Abstract:
We introduce a calculus of stratified resolution, in which special attention is paid to clauses that "define" relations. If such clauses are discovered in the initial set of clauses, they are treated using the rule of definition unfolding, i.e. the rule that replaces defined relations by their definitions. Stratified resolution comes with a powerful notion of redundancy: a clause to which definition unfolding has been applied can be removed from the search space. To prove the completeness of stratified resolution with redundancies, we use a novel combination of Bachmair and Ganzinger's model construction technique and a hierarchical construction of orderings and least fixpoints.
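A minimal generic example of the unfolding rule (not taken from the paper): if the clause set contains the clauses defining a relation P, every other occurrence of P can be replaced by its definition and the original clause deleted from the search space.

```latex
% Suppose the clause set "defines" P via the pair of clauses
%   \lnot P(x) \lor B(x)   and   P(x) \lor \lnot B(x),
% i.e. P(x) is equivalent to B(x). Unfolding replaces P by its
% definition in a clause that uses it:
\[
\lnot P(a) \lor C \;\rightsquigarrow\; \lnot B(a) \lor C ,
\]
% after which the clause containing the defined relation is
% redundant and can be removed from the search space.
```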
Abstract:
In this paper, we show how the clausal temporal resolution technique developed for temporal logic provides an effective method for searching for invariants, and so is suitable for mechanising a wide class of temporal problems. We demonstrate that this scheme of searching for invariants can also be applied to a class of multi-predicate induction problems represented by mutually recursive definitions. Completeness of the approach, examples of its application, and an overview of the implementation are described.
Abstract:
In this paper we show how to extend clausal temporal resolution to the ground eventuality fragment of monodic first-order temporal logic, which has recently been introduced by Hodkinson, Wolter and Zakharyaschev. While a finite Hilbert-style axiomatisation of complete monodic first-order temporal logic was developed by Wolter and Zakharyaschev, we propose a temporal resolution-based proof system which reduces the satisfiability problem for ground eventuality monodic first-order temporal formulae to the satisfiability problem for formulae of classical first-order logic.
Abstract:
The majority position in Brazilian corporate law doctrine holds that partial exercise of appraisal rights (direito de recesso) is not possible. The arguments to this effect are: (i) there is no legal provision for it; (ii) partial exercise of appraisal rights would constitute a form of abuse of rights; (iii) allowing it would mean accepting an incompatible logic, since either the resolution harmed the shareholder's interest in remaining a member of the company, or the shareholder retains that interest and does not exercise the right of withdrawal. However, the ideas defended in this work seek to rebut these arguments, while aiming to demonstrate that partial appraisal rights are possible, permitted, legal and economically efficient. The arguments developed attempt to show that withdrawal is not necessarily a total rupture of the corporate bond and does not bear solely and exclusively on the shareholder's status as a member. It should be noted that the protected interest is dual: that of the company and that of the shareholder. And the shareholder's interest may be either a readjustment of his investment in the company (arising from the resolution that gave rise to the appraisal right) or the severing of the corporate bond. That, however, is a decision that belongs solely and exclusively to the shareholder, in the exercise of a faculty the law confers on him. Finally, it is argued that ruling out partial withdrawal means preventing an economically efficient transaction, since no party is harmed by the exercise of appraisal rights over only part of the eligible shares.