928 results for CALCULATED LEVELS, J, PI USING SHELL MODEL
Abstract:
The level structures of the N = 50 isotones As-83, Ge-82, and Ga-81 have been investigated by means of multi-nucleon transfer reactions. A first experiment was performed with the CLARA PRISMA setup to identify these nuclei. A second experiment was carried out with the GASP array to obtain gamma-ray coincidence information. The results obtained on the high-spin states of these nuclei are used to test the stability of the N = 50 shell closure in the region of Ni-78 (Z = 28). Comparison of the experimental level schemes with shell-model calculations yields an N = 50 energy gap of 4.7(3) MeV at Z = 28. This value, in good agreement with the prediction of the finite-range liquid-drop model as well as with recent large-scale shell-model calculations, does not support a weakening of the N = 50 shell gap down to Z = 28.
Abstract:
RATIONALE AND OBJECTIVES: Dose reduction may compromise patient care through a decrease in image quality. Therefore, the dose savings of new dose-reduction techniques need to be thoroughly assessed. To avoid repeated studies in one patient, chest computed tomography (CT) scans at different dose levels were performed on corpses, comparing model-based iterative reconstruction (MBIR) as a tool to enhance image quality against current standard full-dose imaging. MATERIALS AND METHODS: Twenty-five human cadavers were scanned (CT HD750) after contrast medium injection at decreasing dose levels D0-D5, each reconstructed with MBIR. The data at the full-dose level, D0, were additionally reconstructed with standard adaptive statistical iterative reconstruction (ASIR), which served as the full-dose baseline reference (FDBR). Two radiologists independently compared image quality (IQ) in 3-mm multiplanar reformations for soft-tissue evaluation of D0-D5 against FDBR (-2, diagnostically inferior; -1, inferior; 0, equal; +1, superior; +2, diagnostically superior). For statistical analysis, the intraclass correlation coefficient (ICC) and the Wilcoxon test were used. RESULTS: Mean CT dose index values (mGy) were as follows: D0/FDBR = 10.1 ± 1.7, D1 = 6.2 ± 2.8, D2 = 5.7 ± 2.7, D3 = 3.5 ± 1.9, D4 = 1.8 ± 1.0, and D5 = 0.9 ± 0.5. Mean IQ ratings were as follows: D0 = +1.8 ± 0.2, D1 = +1.5 ± 0.3, D2 = +1.1 ± 0.3, D3 = +0.7 ± 0.5, D4 = +0.1 ± 0.5, and D5 = -1.2 ± 0.5. All values differed significantly from baseline (P < .05), except the mean IQ for D4 (P = .61). The ICC was 0.91. CONCLUSIONS: Compared to ASIR, MBIR allowed a significant dose reduction of 82% without impairment of IQ, resulting in a calculated mean effective dose below 1 mSv.
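As a quick check, the 82% figure in the conclusions follows directly from the mean CTDIvol values reported above; a minimal sketch:

```python
# Relative dose reduction from the reported mean CTDIvol values:
# D0/FDBR = 10.1 mGy (full-dose baseline) and D4 = 1.8 mGy, the lowest
# dose level whose IQ did not differ significantly from baseline.

full_dose_mgy = 10.1
reduced_dose_mgy = 1.8

print(f"dose reduction at D4: {1 - reduced_dose_mgy / full_dose_mgy:.0%}")  # -> 82%
```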
Abstract:
This study aimed to determine the optimal intake of lysine and threonine for broiler breeder hens. Two experiments were conducted to evaluate the responses of birds to digestible lysine (Lys) and threonine (Thr). Eight treatments were assessed in both experiments, with six replicates of eight birds in the Lys experiment and ten birds in the Thr experiment. The dietary levels of Lys and Thr were obtained by a dilution technique. The experimental period was ten weeks for each amino acid studied, comprising six weeks of adaptation and four weeks of data collection. Amino acid intake, egg mass and body weight were fitted using a Reading model. Based on the model coefficients, the cost of the synthetic amino acid sources and the price of fertile eggs determined the intake of each amino acid that maximizes economic return. The minimum intake of Lys and Thr reduced egg production by 40 and 30%, respectively, and egg weight decreased by 12 and 9% at the same intakes of Lys and Thr, respectively. The models generated for predicting Lys and Thr intake were as follows: Lys = 11 x E + 31 x W and Thr = 9.5 x E + 32 x W, where E = egg mass (g/bird per day) and W = body weight (kg/bird). Based on the models, 3 kg birds producing an egg mass of 50 g/day require 643 mg/bird per day of Lys and 569 mg/bird per day of Thr. The optimum economic intake was calculated at 954 and 834 mg/bird per day for Lys and Thr, respectively, corresponding to dietary concentrations of 0.636% Lys and 0.556% Thr at a feed intake of 150 g/bird per day.
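As a worked check of the reported models, the sketch below evaluates both intake equations for the 3 kg, 50 g/day example given above (function names are illustrative). Note that 9.5 x 50 + 32 x 3 = 571, slightly above the reported 569 mg/bird per day, which suggests the published coefficients are rounded.

```python
# Worked check of the intake models reported above (all values from the abstract):
#   Lys = 11*E + 31*W  and  Thr = 9.5*E + 32*W,
# with E = egg mass (g/bird per day) and W = body weight (kg/bird).

def lysine_intake(egg_mass_g: float, body_weight_kg: float) -> float:
    """Digestible Lys requirement, mg/bird per day."""
    return 11.0 * egg_mass_g + 31.0 * body_weight_kg

def threonine_intake(egg_mass_g: float, body_weight_kg: float) -> float:
    """Digestible Thr requirement, mg/bird per day."""
    return 9.5 * egg_mass_g + 32.0 * body_weight_kg

print(lysine_intake(50, 3))     # 643.0, matching the abstract
print(threonine_intake(50, 3))  # 571.0 vs the reported 569 (rounded coefficients)

# Dietary concentration (%) for the optimum economic intakes at 150 g/day feed intake:
print(954 / 150 / 10, 834 / 150 / 10)  # 0.636% Lys, 0.556% Thr
```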
Abstract:
We propose a model for D⁺ → π⁺π⁻π⁺ decays following experimental results which indicate that the two-pion interaction in the S wave is dominated by the scalar resonances f₀(600)/σ and f₀(980). The weak decay amplitude for D⁺ → Rπ⁺, where R is a resonance that subsequently decays into π⁺π⁻, is constructed in a factorization approach. In the S wave, we implement the strong decay R → π⁺π⁻ by means of a scalar form factor. This provides a unitary description of the pion-pion interaction in the entire kinematically allowed mass range m²(ππ) from threshold to about 3 GeV². In order to reproduce the experimental Dalitz plot for D⁺ → π⁺π⁻π⁺, we include contributions beyond the S wave. For the P wave, dominated by the ρ(770)⁰, we use a Breit-Wigner description. Higher waves are accounted for by using the usual isobar prescription for the f₂(1270) and ρ(1450)⁰. The major achievement is a good reproduction of the experimental m²(ππ) distribution, and of the partial as well as the total D⁺ → π⁺π⁻π⁺ branching ratios. Our values are generally smaller than the experimental ones. We discuss this shortcoming and, as a by-product, we predict a value for the poorly known D → σ transition form factor at q² = m²(π).
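As an illustration of the P-wave description mentioned above, the sketch below implements a generic relativistic Breit-Wigner line shape with constant width for the ρ(770)⁰. This is a textbook form, not necessarily the authors' exact parameterisation; the mass and width are PDG values.

```python
# Generic relativistic Breit-Wigner with constant width for the rho(770)^0
# P wave; a textbook form, not necessarily the authors' exact parameterisation.
# Mass and width are the PDG values, in GeV.

M_RHO, GAMMA_RHO = 0.775, 0.149

def breit_wigner(s: float, m: float = M_RHO, gamma: float = GAMMA_RHO) -> complex:
    """BW(s) = 1 / (m^2 - s - i*m*Gamma), with s = m_pipi^2 in GeV^2."""
    return 1.0 / complex(m * m - s, -m * gamma)

# |BW|^2 peaks at s = m^2, i.e. near 0.60 GeV^2 in the Dalitz plot:
print(abs(breit_wigner(M_RHO**2)) ** 2)
```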
Abstract:
The threat of impact or explosive loads is regrettably a scenario to be taken into account in the design of lifeline or critical civilian buildings. These are often made of concrete and not specifically designed for military threats. Numerical simulation of such cases may be undertaken with the aid of state-of-the-art explicit dynamic codes; however, several difficult challenges are inherent to such models: material modeling of the anisotropic failure of concrete, consideration of reinforcement bars and important structural details, adequate modeling of pressure waves from explosions in complex geometries, and efficient solution of models of complete buildings that can realistically assess failure modes. In this work we employ LS-Dyna for the calculations, with Lagrangian finite elements and explicit time integration. Reinforced concrete may be represented fairly accurately with recent models such as the CSCM model [1] and segregated rebars constrained within the continuum mesh. However, such models cannot realistically be employed for complete models of large buildings, due to limitations of time and computer resources. The use of structural beam and shell elements for this purpose is the obvious solution, with much lower computational cost. However, this modeling requires careful calibration in order to adequately reproduce the highly nonlinear response of structural concrete members, including bending with and without compression, cracking or plastic crushing, plastic deformation of reinforcement, erosion of failed elements, etc. The main objective of this work is to provide a strategy for modeling such scenarios based on structural elements, using available material models for structural elements [2] and techniques to include the reinforcement in a realistic way. These models are calibrated against fully three-dimensional models and shown to be accurate enough. At the same time, they provide the basis for realistic simulation of impact and explosion on full-scale buildings.
Abstract:
We forecast quarterly US inflation based on the generalized Phillips curve using econometric methods which incorporate dynamic model averaging. These methods not only allow for coefficients to change over time, but also allow for the entire forecasting model to change over time. We find that dynamic model averaging leads to substantial forecasting improvements over simple benchmark regressions and more sophisticated approaches such as those using time-varying coefficient models. We also provide evidence on which sets of predictors are relevant for forecasting in each period.
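The weight recursion behind dynamic model averaging, as commonly implemented via the forgetting-factor scheme of Raftery et al. (2010), can be sketched in a few lines. The predictive likelihoods below are placeholders; in this setting each candidate model would be a Phillips-curve regression with time-varying coefficients.

```python
# Sketch of the forgetting-factor DMA recursion: model probabilities are
# flattened by a forgetting factor alpha (prediction step), then reweighted by
# each model's predictive likelihood for the new observation (update step).
import numpy as np

def dma_update(prev_probs: np.ndarray, pred_liks: np.ndarray, alpha: float = 0.99) -> np.ndarray:
    """One step of dynamic model averaging: forget, then apply Bayes' rule."""
    pred = prev_probs**alpha / np.sum(prev_probs**alpha)  # forgetting flattens the weights,
                                                          # letting the best model change over time
    post = pred * pred_liks                               # Bayes update with predictive densities
    return post / post.sum()

probs = np.full(3, 1 / 3)          # equal prior over three candidate models
liks = np.array([0.8, 0.3, 0.1])   # hypothetical predictive likelihoods at time t
print(dma_update(probs, liks))     # weights shift toward the best-predicting model
```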
Abstract:
A new technique for fixing Biomphalaria glabrata for histologic studies is described. It consists of making several holes in the shell from the outside before placing the entire snail into the fixative. It is a very practical and quick procedure that showed excellent results when compared to the usual techniques.
Abstract:
We propose a method for brain atlas deformation in the presence of large space-occupying tumors, based on an a priori model of lesion growth that assumes radial expansion of the lesion from its starting point. Our approach involves three steps. First, an affine registration brings the atlas and the patient into global correspondence. Then, the seeding of a synthetic tumor into the brain atlas provides a template for the lesion. The last step is the deformation of the seeded atlas, combining a method derived from optical flow principles with a model of lesion growth. Results show that good registration is achieved and that the method can be applied to automatic segmentation of structures and substructures in brains with gross deformation, with important medical applications in neurosurgery, radiosurgery, and radiotherapy.
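The optical-flow-derived deformation in the last step can be illustrated with a generic demons-style registration update. The sketch below is a textbook formulation on toy 2D numpy images, not the authors' exact scheme.

```python
# Self-contained toy illustration: a demons-style update derived from optical
# flow principles, iteratively warping a "seeded atlas" toward a "patient" image.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_step(fixed, moving, u, v, sigma=1.0):
    """One iteration: warp `moving` by the field (v, u), then update the field."""
    rows, cols = np.indices(fixed.shape, dtype=float)
    warped = map_coordinates(moving, [rows + v, cols + u], order=1, mode="nearest")
    gy, gx = np.gradient(warped)
    diff = fixed - warped
    denom = gx**2 + gy**2 + diff**2
    denom[denom == 0] = 1.0                            # avoid division by zero in flat regions
    u = gaussian_filter(u + diff * gx / denom, sigma)  # Gaussian smoothing regularises
    v = gaussian_filter(v + diff * gy / denom, sigma)  # the displacement field
    return u, v

fixed = np.zeros((64, 64)); fixed[20:40, 20:40] = 1.0    # toy "patient" image
moving = np.zeros((64, 64)); moving[24:44, 24:44] = 1.0  # toy "seeded atlas"
u = np.zeros_like(fixed); v = np.zeros_like(fixed)
for _ in range(50):                                      # iterate toward alignment
    u, v = demons_step(fixed, moving, u, v)
```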
Abstract:
The evaluation of radioactivity accidentally released into the atmosphere involves determining the radioactivity levels of rainwater samples. Rainwater scavenges airborne radioactivity in such a way that surface contamination can be deduced from the rainfall rate and the radioactivity content of the rainwater. For this purpose, rainwater is usually collected in large-surface collectors and then measured by gamma spectrometry after treatments such as evaporation or iron hydroxide precipitation. We found that collectors can be adapted to accept large (47 mm diameter) cartridges containing a strongly acidic resin (Dowex AG 88) which is able to quantitatively extract radioactivity from rainwater, even during heavy rainfall. The resin can then be measured by gamma spectrometry. The detection limit is 0.1 Bq per sample of resin (80 g) for ¹³⁷Cs. Natural ⁷Be and ²¹⁰Pb can also be measured, and the activity ratio of the two radionuclides is comparable with those obtained through iron hydroxide precipitation and air filter measurements. Occasionally, ²²Na has also been measured above the detection limit. A comparison between the evaporation method and the resin method demonstrated that two thirds of the ⁷Be can be lost during the evaporation process. The resin method is simple and highly efficient at extracting radioactivity. Because of these considerable advantages, we anticipate it could replace earlier rainwater determination methods. Moreover, it does not require transporting large volumes of rainwater to the laboratory.
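The deposition arithmetic implied by the opening sentences is simple unit bookkeeping: 1 mm of rain over 1 m² is 1 L of water, so deposited activity per unit area is the activity concentration times the rainfall depth. A minimal sketch with illustrative values (not measurements from the study):

```python
# Unit bookkeeping: 1 mm of rain over 1 m^2 = 1 L of water, so
# deposition (Bq/m^2) = concentration (Bq/L) * rainfall depth (mm).

def surface_deposition_bq_per_m2(activity_bq_per_l: float, rainfall_mm: float) -> float:
    """Deposited activity in Bq/m^2."""
    return activity_bq_per_l * rainfall_mm

print(surface_deposition_bq_per_m2(activity_bq_per_l=0.5, rainfall_mm=12.0))  # 6.0 Bq/m^2
```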
Abstract:
The objectives of this work were to estimate the genetic and phenotypic parameters and to predict the genetic and genotypic values of selection candidates obtained from intraspecific crosses in Panicum maximum, as well as the performance of the hybrid progeny of existing and projected crosses. Seventy-nine intraspecific hybrids obtained from artificial crosses among five apomictic and three sexual autotetraploid individuals were evaluated in a clonal test with two replications and ten plants per plot. Green matter yield, total and leaf dry matter yields, and leaf percentage were evaluated in five cuts per year for three years. Genetic parameters were estimated, and breeding and genotypic values were predicted, using the restricted maximum likelihood/best linear unbiased prediction procedure (REML/BLUP). The dominance genetic variance was estimated by fitting the effect of full-sib families. Low-magnitude individual narrow-sense heritabilities (0.02-0.05), individual broad-sense heritabilities (0.14-0.20) and repeatabilities measured on an individual basis (0.15-0.21) were obtained. Dominance effects for all evaluated characteristics indicated that breeding strategies that exploit heterosis must be adopted. An increase of less than 5% in repeatability was obtained for a three-year evaluation period, which may serve as a criterion to determine the maximum number of years of evaluation to adopt without compromising gain per cycle of selection. The identification of hybrid candidates for future cultivars, and of those that can be incorporated into the breeding program, was based on the genotypic and breeding values, respectively. The prediction of the performance of the hybrid progeny, based on the breeding values of the progenitors, permitted the identification of the best crosses and indicated the best parents to use in crosses.
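The heritability ratios whose ranges are reported above are simple functions of the variance components. The sketch below uses hypothetical components chosen to fall inside the reported ranges, not the study's actual estimates:

```python
# Narrow-sense heritability uses the additive variance alone; broad-sense adds
# the dominance variance (here, the component estimated from full-sib families).

def narrow_sense_h2(var_a: float, var_d: float, var_e: float) -> float:
    """h^2 = sigma_A^2 / sigma_P^2, with sigma_P^2 = sigma_A^2 + sigma_D^2 + sigma_E^2."""
    return var_a / (var_a + var_d + var_e)

def broad_sense_h2(var_a: float, var_d: float, var_e: float) -> float:
    """H^2 = (sigma_A^2 + sigma_D^2) / sigma_P^2."""
    return (var_a + var_d) / (var_a + var_d + var_e)

# Hypothetical variance components chosen to fall inside the reported ranges:
print(narrow_sense_h2(0.04, 0.12, 0.84))  # 0.04 (reported range 0.02-0.05)
print(broad_sense_h2(0.04, 0.12, 0.84))   # 0.16 (reported range 0.14-0.20)
```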
Abstract:
The objective of this study was to assess genotype-by-environment interaction for seed yield per plant in rapeseed cultivars grown in Northern Serbia using the AMMI (additive main effects and multiplicative interaction) model. The study comprised 19 rapeseed genotypes, analyzed over seven years in field trials arranged in a randomized complete block design with three replicates. Seed yield per plant of the tested cultivars varied from 1.82 to 19.47 g across the seven seasons, with an average of 7.41 g. In the analysis of variance, 72.49% of the total yield variation was explained by the environment, 7.71% by differences between genotypes, and 19.09% by genotype-by-environment interaction. On the biplot, cultivars with high yield genetic potential were positively correlated with the seasons with optimal growing conditions, while the cultivars with lower yield potential were correlated with the years with unfavorable conditions. Seed yield per plant is highly influenced by environmental factors, which indicates the adaptability of specific genotypes to specific seasons.
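The AMMI decomposition itself is compact: after removing the additive main effects by two-way centring, the genotype-by-environment residual matrix is decomposed by SVD, and the leading singular vectors give the biplot scores. A minimal sketch with random placeholder data of the study's dimensions (19 genotypes x 7 seasons):

```python
# Generic AMMI decomposition on placeholder data (not the study's yields).
import numpy as np

rng = np.random.default_rng(0)
yields = rng.normal(7.41, 2.0, size=(19, 7))  # 19 genotypes x 7 seasons

grand = yields.mean()
gen_eff = yields.mean(axis=1, keepdims=True) - grand  # genotype main effects
env_eff = yields.mean(axis=0, keepdims=True) - grand  # environment main effects
interaction = yields - grand - gen_eff - env_eff      # doubly centred GxE matrix

u, s, vt = np.linalg.svd(interaction, full_matrices=False)
ipca1_gen = u[:, 0] * np.sqrt(s[0])   # genotype scores on the first interaction axis
ipca1_env = vt[0] * np.sqrt(s[0])     # environment scores (biplot coordinates)
print(s[0]**2 / (s**2).sum())         # share of GxE captured by IPCA1
```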
Abstract:
The objective of this work was to evaluate the feasibility of simulating maize yield in a subtropical region of southern Brazil using the general large area model (Glam). A 16-year time series of daily weather data was used. The model was adjusted and tested as an alternative for simulating maize yield at small and large spatial scales. Simulated and observed grain yields were highly correlated (r above 0.8; p<0.01) at large scales (greater than 100,000 km²), with variable and mostly lower correlations (r from 0.65 to 0.87; p<0.1) at small spatial scales (less than 10,000 km²). Large area models can contribute to monitoring or forecasting regional patterns of variability in maize production in the region, providing a basis for agricultural decision making, and Glam-Maize is one of the alternatives.
Abstract:
Computed tomography (CT) is an imaging technique in which interest has been growing rapidly since it came into use in the 1970s. Today, it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even though a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. In order to ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis while avoiding images of unnecessarily high quality. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular in follow-up studies requiring several CT examinations over the patient's life. Indeed, children and young adults are more sensitive to radiation because of their faster metabolism, and harmful consequences are more likely to occur because of their longer life expectancy. The recent introduction of iterative reconstruction algorithms, which were designed to substantially reduce dose, is certainly a major achievement in CT evolution, but it has also created difficulties in assessing the quality of the images produced by those algorithms. The goal of the present work was to propose a strategy to investigate the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic question. The major difficulty lies in having a clinically relevant way to estimate image quality. To ensure the choice of pertinent image quality criteria, this work was performed in close collaboration with radiologists. The work began by tackling the characterisation of image quality in musculoskeletal examinations.
We focused, in particular, on the behaviour of image noise and spatial resolution when iterative image reconstruction was used. The analysis of these physical parameters allowed radiologists to adapt their acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we focused on the use of mathematical model observers. Our experimental parameters determined the type of model to use. Ideal model observers were applied to characterise image quality when purely physical results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with those of human observers, taking advantage of the human visual system elements these models incorporate. This work confirmed that the use of model observers makes it possible to assess image quality using a task-based approach, which, in turn, establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced with this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has also clarified the role of medical physicists in CT imaging: standard metrics remain important for assessing unit compliance with legal requirements, but model observers are the way forward for optimising imaging protocols.
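As an illustration of the anthropomorphic model observers discussed above, the sketch below implements a generic channelized Hotelling observer (CHO) on synthetic images. The channels, images, and dimensions are simplified placeholders, not the thesis's actual setup.

```python
# Generic channelized Hotelling observer on synthetic signal-present /
# signal-absent images; reports the detectability index d'.
import numpy as np

rng = np.random.default_rng(1)
n_px, n_ch, n_train = 32, 6, 200

# Placeholder channel matrix (a real study would use e.g. Gabor or
# dense-difference channels); random orthonormal columns keep the sketch short.
channels, _ = np.linalg.qr(rng.normal(size=(n_px * n_px, n_ch)))

signal = np.zeros((n_px, n_px)); signal[14:18, 14:18] = 0.5       # toy low-contrast target
absent = rng.normal(size=(n_train, n_px * n_px))                  # noise-only images
present = rng.normal(size=(n_train, n_px * n_px)) + signal.ravel()

ca, cp = absent @ channels, present @ channels                    # channel responses
cov = 0.5 * (np.cov(ca, rowvar=False) + np.cov(cp, rowvar=False))
w = np.linalg.solve(cov, cp.mean(axis=0) - ca.mean(axis=0))       # Hotelling template

ta, tp = ca @ w, cp @ w                                           # decision variables
d_prime = (tp.mean() - ta.mean()) / np.sqrt(0.5 * (ta.var() + tp.var()))
print(d_prime)                                                    # detectability index
```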