922 results for expense

Relevance: 10.00%

Abstract:

ABSTRACT ONTOLOGIES AND METHODS FOR INTEROPERABILITY OF ENGINEERING ANALYSIS MODELS (EAMS) IN AN E-DESIGN ENVIRONMENT SEPTEMBER 2007 NEELIMA KANURI, B.S., BIRLA INSTITUTE OF TECHNOLOGY AND SCIENCES PILANI INDIA M.S., UNIVERSITY OF MASSACHUSETTS AMHERST Directed by: Professor Ian Grosse. Interoperability is the ability of two or more systems to exchange and reuse information efficiently. This thesis presents new techniques for interoperating engineering tools using ontologies as the basis for representing, visualizing, reasoning about, and securely exchanging abstract engineering knowledge between software systems. The specific engineering domain that is the primary focus of this report is the modeling knowledge associated with the development of engineering analysis models (EAMs). This abstract modeling knowledge has been used to support integration of analysis and optimization tools in iSIGHT-FD, a commercial engineering environment. ANSYS, a commercial FEA tool, has been wrapped as an analysis service available inside of iSIGHT-FD. An engineering analysis modeling (EAM) ontology has been developed and instantiated to form a knowledge base for representing analysis modeling knowledge. The instances of the knowledge base are the analysis models of real-world applications. To illustrate how abstract modeling knowledge can be exploited for useful purposes, a cantilever I-beam design optimization problem has been used as a test-bed proof-of-concept application. Two distinct finite element models of the I-beam are available to analyze a given beam design: a beam-element finite element model with potentially lower accuracy but significantly reduced computational cost, and a high-fidelity, high-cost shell-element finite element model. The goal is to obtain an optimized I-beam design at minimum computational expense. An intelligent knowledge-based (KB) tool was developed and implemented in FiPER.
This tool reasons about the modeling knowledge to intelligently shift between the beam and shell element models during an optimization process, selecting the best analysis model for a given optimization design state. In addition to improved interoperability and design optimization, methods are developed and presented that demonstrate the ability to operate on ontological knowledge bases to perform important engineering tasks. One such method is an automatic technical report generation method, which converts the modeling knowledge associated with an analysis model into a flat technical report. The second is a secure knowledge sharing method, which allocates permissions to portions of knowledge to control knowledge access and sharing. Acting together, the two methods enable recipient-specific, fine-grained control of knowledge viewing and sharing in an engineering workflow integration environment such as iSIGHT-FD. They play an important role in reducing the large-scale inefficiencies in current product design and development cycles caused by poor knowledge sharing and reuse between people and software engineering tools. This work is a significant advance in both the understanding and the application of knowledge integration in a distributed engineering design framework.
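As one way to picture the fidelity-switching idea described above, here is a toy sketch; the rule, names, and thresholds are hypothetical illustrations, not the thesis's actual knowledge-based logic:

```python
def select_analysis_model(improvement, switch_tol=1e-2):
    """Hypothetical switching rule: keep the cheap beam-element model
    while the optimizer is still improving quickly; switch to the
    high-fidelity shell-element model once gains fall below switch_tol."""
    return "beam" if improvement > switch_tol else "shell"


def optimize(evaluate, x0, steps=12):
    """Toy descent loop recording which model fidelity each step would use."""
    x, prev = x0, evaluate(x0)
    improvement, used = float("inf"), []
    for _ in range(steps):
        used.append(select_analysis_model(improvement))
        x *= 0.5                      # placeholder design update
        cur = evaluate(x)
        improvement, prev = prev - cur, cur
    return used
```

Early iterations use the cheap model; as the objective stops improving, the rule hands over to the expensive one.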

Relevance: 10.00%

Abstract:

Purpose: Respiratory motion causes substantial uncertainty in radiotherapy treatment planning. Four-dimensional computed tomography (4D-CT) is a useful tool to image tumor motion during normal respiration. Treatment margins can be reduced by targeting the motion path of the tumor. The expense and complexity of 4D-CT, however, may be cost-prohibitive at some facilities. We developed an image processing technique to produce images from cine CT that contain significant motion information without 4D-CT. The purpose of this work was to compare cine CT and 4D-CT for the purposes of target delineation and dose calculation, and to explore the role of PET in target delineation of lung cancer. Methods: To determine whether cine CT could substitute for 4D-CT for small mobile lung tumors, we compared target volumes delineated by a physician on cine CT and 4D-CT for 27 tumors with intrafractional motion greater than 1 cm. We assessed dose calculation by comparing dose distributions calculated on respiratory-averaged cine CT and respiratory-averaged 4D-CT using the gamma index. A threshold-based PET segmentation model of size, motion, and source-to-background was developed from phantom scans and validated with 24 lung tumors. Finally, the feasibility of integrating cine CT and PET for contouring was assessed on a small group of larger tumors. Results: Cine CT to 4D-CT target volume ratios were 1.05±0.14 and 0.97±0.13 for high-contrast and low-contrast tumors, respectively, which was within intraobserver variation. Dose distributions on cine CT produced good agreement at the 2%/1 mm criterion with 4D-CT for 71 of 73 patients. The segmentation model fit the phantom data with R² = 0.96 and produced PET target volumes that matched CT better than six published methods (−5.15%). Application of the model to more complex tumors produced mixed results, and further research is necessary to adequately integrate PET and cine CT for delineation.
Conclusions: Cine CT can be used for target delineation of small mobile lesions with minimal differences from 4D-CT. PET, utilizing the segmentation model, can provide additional contrast. Additional research is required to assess the efficacy of complex tumor delineation with cine CT and PET. Respiratory-averaged cine CT can substitute for respiratory-averaged 4D-CT for dose calculation with negligible differences.
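The gamma-index agreement used above combines a dose-difference and a distance-to-agreement criterion. A minimal 1-D sketch (simplified: global normalisation to the maximum reference dose, exhaustive search over all points):

```python
import numpy as np

def gamma_index_1d(x, dose_ref, dose_eval, dose_tol=0.02, dta_mm=1.0):
    """1-D gamma index: for each reference point, the minimum combined
    dose-difference / distance-to-agreement metric over all evaluated
    points. gamma <= 1 means the point passes the (dose_tol, dta_mm)
    criterion; dose_tol is a fraction of the maximum reference dose."""
    dd = dose_tol * dose_ref.max()
    gammas = np.empty(len(x))
    for i in range(len(x)):
        dist = (x - x[i]) / dta_mm          # spatial term, in DTA units
        diff = (dose_eval - dose_ref[i]) / dd  # dose term, in tolerance units
        gammas[i] = np.sqrt(dist**2 + diff**2).min()
    return gammas
```

Identical distributions yield gamma = 0 everywhere; a uniform 1% dose offset still passes a 2%/1 mm test.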

Relevance: 10.00%

Abstract:

There is a lively debate on whether biodiversity conservation and agricultural production could be better reconciled by land sparing (strictly separating production fields and conservation areas) or by land sharing (combining agricultural production and biodiversity conservation on the same land). The debate originates from tropical countries, where agricultural land use continues to increase at the expense of natural ecosystems. But is it also relevant for Europe, where agriculture is withdrawing from marginal regions whilst farming of fertile lands continues to be intensified? Based on recent research on farmland biodiversity we conclude that the land sharing – land sparing dichotomy is too simplistic for Europe. Instead we differentiate between productive and marginal farmland. On productive farmland, semi-natural habitats are required to yield ecosystem services relevant for agriculture, to promote endangered farmland species which society wants to conserve even in intensively farmed regions, and to allow migration of non-farmland species through the agricultural matrix. On marginal farmland, high-nature-value farming is a traditional way of land sharing, yielding high-quality agricultural products and conserving specialized species. To conserve highly disturbance-sensitive species, there is a need for nature reserves. In conclusion, land sparing alone is not a viable solution for Europe in either productive or marginal farmland, though for different reasons in each type of farmland.

Relevance: 10.00%

Abstract:

The inception of the Little Ice Age (~1400–1700 AD) is believed to have been driven by an interplay of external forcing and climate system internal variability. While the hemispheric signal seems to have been dominated by solar irradiance and volcanic eruptions, the understanding of mechanisms shaping the climate on a continental scale is less robust. In an ensemble of transient model simulations and a new type of sensitivity experiments with artificial sea ice growth, the authors identify a sea ice–ocean–atmosphere feedback mechanism that amplifies the Little Ice Age cooling in the North Atlantic–European region and produces the temperature pattern suggested by paleoclimatic reconstructions. Initiated by increasing negative forcing, the Arctic sea ice substantially expands at the beginning of the Little Ice Age. The excess of sea ice is exported to the subpolar North Atlantic, where it melts, thereby weakening convection of the ocean. Consequently, northward ocean heat transport is reduced, reinforcing the expansion of the sea ice and the cooling of the Northern Hemisphere. In the Nordic Seas, sea surface height anomalies cause the oceanic recirculation to strengthen at the expense of the warm Barents Sea inflow, thereby further reinforcing sea ice growth. The absent ocean–atmosphere heat flux in the Barents Sea results in an amplified cooling over Northern Europe. The positive nature of this feedback mechanism enables sea ice to remain in an expanded state for decades up to a century, favoring sustained cold periods over Europe such as the Little Ice Age. Support for the feedback mechanism comes from recent proxy reconstructions around the Nordic Seas.

Relevance: 10.00%

Abstract:

In several regions of the world, climate change is expected to have severe impacts on agricultural systems. Changes in land management are one way to adapt to future climatic conditions, including land-use changes and local adjustments of agricultural practices. In previous studies, options for adaptation have mostly been explored by testing alternative scenarios. Systematic explorations of land management possibilities using optimization approaches were so far mainly restricted to studies of land and resource management under constant climatic conditions. In this study, we bridge this gap and exploit the benefits of multi-objective regional optimization for identifying optimum land management adaptations to climate change. We design a multi-objective optimization routine that integrates a generic crop model and considers two climate scenarios for 2050 in a meso-scale catchment on the Swiss Central Plateau with already limited water resources. The results indicate that adaptation will be necessary in the study area to cope with a decrease in productivity by 0–10 %, an increase in soil loss by 25–35 %, and an increase in N-leaching by 30–45 %. Adaptation options identified here exhibit conflicts between productivity and environmental goals, but compromises are possible. Necessary management changes include (i) adjustments of crop shares, i.e. increasing the proportion of early harvested winter cereals at the expense of irrigated spring crops, (ii) widespread use of reduced tillage, (iii) allocation of irrigated areas to soils with low water-retention capacity at lower elevations, and (iv) conversion of some pre-alpine grasslands to croplands.
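The conflicts between productivity and environmental goals noted above are exactly what a multi-objective optimizer resolves into a set of compromise (non-dominated) solutions. A minimal Pareto filter, with hypothetical option tuples standing in for the study's management alternatives:

```python
def pareto_front(options):
    """Return the non-dominated options among (productivity, soil_loss,
    n_leaching) tuples: productivity is maximised, the two environmental
    indicators are minimised."""
    def dominates(b, a):
        # b dominates a if it is no worse in every objective and
        # strictly better in at least one.
        no_worse = b[0] >= a[0] and b[1] <= a[1] and b[2] <= a[2]
        better = b[0] > a[0] or b[1] < a[1] or b[2] < a[2]
        return no_worse and better
    return [a for a in options if not any(dominates(b, a) for b in options)]
```

An option that is beaten on all three objectives drops out; the remaining front expresses the productivity-environment trade-off.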

Relevance: 10.00%

Abstract:

This research examines the graduation rate experienced by students receiving public education services in the state of Texas. Special attention is paid to the subgroup of Texas students who meet Texas Education Agency criteria for handicapped status. The study is guided by two research questions: What are the high school completion rates experienced by handicapped and nonhandicapped students attending Texas public schools? And what are the predictors of graduation for handicapped and nonhandicapped students? In addition, the following hypotheses are explored. Hypothesis 1: Handicapped students attending a Texas public school will experience a lower rate of high school completion than their nonhandicapped counterparts. Hypothesis 2: Handicapped and nonhandicapped students attending school in a Texas public school with a budget above the median budget for Texas public schools will experience a higher rate of high school completion than similar students in Texas public schools with a budget below the median budget. Hypothesis 3: Handicapped and nonhandicapped students attending school in large Texas urban areas will experience a lower rate of high school completion than similar students in Texas public schools in rural areas.
Hypothesis 4: Handicapped and nonhandicapped students attending a Texas public school in a county which rates above the state median for food stamps and AFDC recipients will experience a lower rate of high school completion than students living in counties below the median. The study employs extant data from the records of the Texas Education Agency for the 1988-1989 and 1989-1990 school years, from the Texas Department of Health for 1989 and 1990, and from the 1980 Census. The study reveals that nonhandicapped students are graduating with a two-year average rate of .906, while handicapped students following an Individualized Educational Program (IEP) achieve a two-year average rate of .532, and handicapped students following the regular academic program present a two-year average graduation rate of only .371. The presence of other handicapped students and the school district's average expense per student are found to contribute significantly to the completion rates of handicapped students. Size groupings are used to elucidate the various impacts of these variables on different school districts and different student groups. Conclusions and implications are offered regarding the need to reach national consensus on the definition and computation of high school completion for both handicapped and nonhandicapped students, and the need for improved statewide tracking of handicapped completion rates.

Relevance: 10.00%

Abstract:

Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. Linkage disequilibrium methods can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach, to find loci with a specific effect size. The increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs. Nine linkage disequilibrium tests were examined by simulation. Five tests involve selecting isolated unrelated individuals, while four involve the selection of parent-child trios (the TDT design). All nine tests were found to identify disequilibrium with the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency were found to increase the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased the power to detect disequilibrium. Discordant sampling was used for several of the tests. It was found that the more stringent the sampling, the greater the power to detect disequilibrium in a sample of given size. The power to detect disequilibrium was not affected by the presence of polygenic effects. When the trait locus had more than two trait alleles, the power of the tests maximized to less than one.
For the simulation methods used here, when there were more than two trait alleles there was a probability, equal to one minus the heterozygosity of the marker locus, that both trait alleles were in disequilibrium with the same marker allele, rendering the marker uninformative for disequilibrium. The five tests using isolated unrelated individuals were found to have excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency, and increased disequilibrium. Polygenic effects did not affect the error rates. The TDT (transmission disequilibrium test)-based tests were not liable to any increase in error rates. For all sample ascertainment costs, for recent mutations (<100 generations) linkage disequilibrium tests were less expensive to carry out than the variance component test. Candidate gene scans saved even more money. The use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
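The TDT referenced above reduces to a McNemar-type chi-square on transmission counts from heterozygous parents; a minimal sketch:

```python
def tdt_chi_square(transmitted, untransmitted):
    """TDT statistic: b = times heterozygous parents transmitted the
    candidate allele to an affected child, c = times they did not.
    Under the null of no linkage or no association,
    (b - c)^2 / (b + c) is asymptotically chi-square with 1 df."""
    b, c = transmitted, untransmitted
    if b + c == 0:
        raise ValueError("no informative transmissions")
    return (b - c) ** 2 / (b + c)
```

For example, 60 transmissions against 40 non-transmissions gives a statistic of 4.0, just past the 0.05 critical value of 3.84 for one degree of freedom.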

Relevance: 10.00%

Abstract:

While many studies have been conducted in mountainous catchments to examine the impact of climate change on hydrology, the interactions between climate change and land use components have largely unknown impacts on hydrology in alpine regions. They need to be given special attention in order to devise possible strategies concerning general development in these regions. Thus, the main aim was to examine the impact of land use (i.e. bushland expansion) and climate changes (i.e. increase of temperature) on hydrology by model simulations. For this purpose, the physically based WaSiM-ETH model was applied to the catchment of Ursern Valley in the central Alps (191 km²) over the period 1983−2005. Modelling results showed that the reduction of the mean monthly discharge during the summer period is due primarily to the temporal retreat of snowmelt discharge and secondarily to the reduction in the glacier surface area together with its retreat in time, rather than to the increase in evapotranspiration due to the expansion of the “green alder” at the expense of grassland. The significant decrease in summer discharge during July, August and September shows a change in the regime from glacio-nival to nivo-glacial. These changes are confirmed by the modelling results, which attest to a temporal shift of snowmelt and glacier discharge towards earlier in the year: March, April and May for snowmelt, and May and June for glacier discharge. It is expected that the yearly total discharge will be reduced by 0.6% in the near future due to the land use changes alone, whereas it will be reduced by about 5% if climate change is also taken into account. Copyright © 2013 John Wiley & Sons, Ltd.

Relevance: 10.00%

Abstract:

Though IP multicast is resource efficient in delivering data to a group of members simultaneously, it suffers from a scalability problem with the number of concurrently active multicast groups because it requires a router to keep forwarding state for every multicast tree passing through it. To solve this state scalability problem, we proposed a scheme called aggregated multicast. The key idea is that multiple groups are forced to share a single delivery tree. In our earlier work, we introduced the basic concept of aggregated multicast and presented some initial results to show that multicast state can be reduced. In this paper, we develop a more quantitative assessment of the cost/benefit trade-offs. We propose an algorithm to assign multicast groups to delivery trees with controllable cost and introduce metrics to measure multicast state and tree management overhead for multicast schemes. We then compare aggregated multicast with conventional multicast schemes, such as the source-specific tree scheme and the shared tree scheme. Our extensive simulations show that aggregated multicast can achieve significant routing state and tree management overhead reduction while containing the expense of extra resources (bandwidth waste and tunnelling overhead). We conclude that aggregated multicast is a very cost-effective and promising direction for scalable transit domain multicast provisioning.
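The core trade-off of aggregated multicast, sharing a tree at the cost of some wasted coverage, can be sketched as a simple assignment heuristic. This is an illustrative simplification with hypothetical names, not the paper's actual algorithm, which operates on real router topologies:

```python
def assign_group(group, trees, overhead_limit=0.3):
    """Assign a multicast group (a set of member edge nodes) to the
    existing shared tree that covers it with the least wasted coverage,
    provided the waste stays below overhead_limit; otherwise start a
    dedicated tree. Returns the index of the chosen tree."""
    best, best_waste = None, overhead_limit
    for i, tree in enumerate(trees):
        if group <= tree:                        # tree covers every member
            waste = len(tree - group) / len(tree)  # bandwidth-waste proxy
            if waste < best_waste:
                best, best_waste = i, waste
    if best is None:                             # no acceptable shared tree
        trees.append(set(group))
        best = len(trees) - 1
    return best
```

Raising `overhead_limit` forces more groups onto shared trees (less state, more wasted bandwidth), which is the controllable cost knob the paper studies.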

Relevance: 10.00%

Abstract:

Stable oxygen isotope composition of atmospheric precipitation (δ18Op) was scrutinized from 39 stations distributed over Switzerland and its border zone. Monthly amount-weighted δ18Op values averaged over the 1995–2000 period showed the expected strong linear altitude dependence (−0.15 to −0.22‰ per 100 m) only during the summer season (May–September). Steeper gradients (~ −0.56 to −0.60‰ per 100 m) were observed for winter months over a low elevation belt, while hardly any altitudinal difference was seen for high elevation stations. This dichotomous pattern could be explained by the characteristically shallower vertical atmospheric mixing height during the winter season and provides empirical evidence for recently simulated effects of stratified atmospheric flow on orographic precipitation isotopic ratios. This helps explain "anomalous" deflected altitudinal water isotope profiles reported from many other high relief regions. Grids and isotope distribution maps of the monthly δ18Op have been calculated over the study region for 1995–1996. The adopted interpolation method took into account both the variable mixing heights and the seasonal difference in the isotopic lapse rate and combined them with residual kriging. The presented data set allows a point estimation of δ18Op with monthly resolution. According to the test calculations executed on subsets, this two-year data set can be extended back to 1992 with maintained fidelity and, with a reduced station subset, even back to 1983 at the expense of faded reliability of the derived δ18Op estimates, mainly in the eastern part of Switzerland. Before 1983, reliable results can only be expected for the Swiss Plateau since important stations representing eastern and south-western Switzerland were not yet in operation.
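The seasonal altitude gradients reported above can be applied as a simple lapse-rate shift. This sketch uses assumed mid-range gradients from the quoted intervals; the paper's actual interpolation additionally accounts for mixing height and residual kriging:

```python
def shift_d18o(d18o_ref, alt_ref_m, alt_m, season):
    """Shift a reference monthly d18O value (per mille) to another
    altitude using an assumed mid-range seasonal gradient per 100 m:
    summer about -0.19, winter (low-elevation belt) about -0.58."""
    lapse_per_100m = {"summer": -0.19, "winter": -0.58}[season]
    return d18o_ref + lapse_per_100m * (alt_m - alt_ref_m) / 100.0
```

For instance, moving a summer value of −10‰ from 500 m up to 1500 m gives roughly −11.9‰.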

Relevance: 10.00%

Abstract:

The choice and duration of antiplatelet therapy for secondary prevention of coronary artery disease (CAD) is determined by the clinical context and treatment strategy. Oral antiplatelet agents for secondary prevention include the cyclo-oxygenase-1 inhibitor aspirin, and the ADP dependent P2Y12 inhibitors clopidogrel, prasugrel and ticagrelor. Aspirin constitutes the cornerstone in secondary prevention of CAD and is complemented by clopidogrel in patients with stable CAD undergoing percutaneous coronary intervention. Among patients with acute coronary syndrome, prasugrel and ticagrelor improve net clinical outcome by reducing ischaemic adverse events at the expense of an increased risk of bleeding as compared with clopidogrel. Prasugrel appears particularly effective among patients with ST elevation myocardial infarction to reduce the risk of stent thrombosis compared with clopidogrel, and offered a greater net clinical benefit among patients with diabetes compared with patients without diabetes. Ticagrelor is associated with reduced mortality without increasing the rate of coronary artery bypass graft (CABG)-related bleeding as compared with clopidogrel. Dual antiplatelet therapy should be continued for a minimum of 1 year among patients with acute coronary syndrome irrespective of stent type; among patients with stable CAD treated with new generation drug-eluting stents, available data suggest no benefit to prolong antiplatelet treatment beyond 6 months.

Relevance: 10.00%

Abstract:

Up to one third of the general population suffers from symptoms caused by hemorrhoids. Conservative treatment comes first unless the patient presents with an acute hemorrhoidal prolapse or a thrombosis. A fiber-enriched diet is the primary treatment option, recommended in the perioperative period as well as for long-term prophylaxis. A time-limited application of topical ointments or suppositories and/or flavonoids are further treatment options. When symptoms persist, interventional procedures should be considered for grade I-II hemorrhoids, and surgery for grade III-IV hemorrhoids. Rubber band ligation is the interventional treatment of choice. A comparable efficacy using sclerosing or infrared therapy has not yet been demonstrated; we therefore do not recommend these treatment options for the cure of hemorrhoids. Self-treatment by anal insertion of bougies is low-risk and may be successful, particularly in the setting of an elevated sphincter pressure. Anal dilation, sphincterotomy, cryosurgery, bipolar diathermy, galvanic electrotherapy, and heat therapy should be regarded as obsolete given the poor or missing data reported for these methods. For a long time, the classic excisional hemorrhoidectomy was considered the gold standard as far as surgical procedures are concerned. Primary closure (Ferguson) seems to be superior to the "open" version (Milligan-Morgan) with respect to postoperative pain and wound healing. The more recently proposed stapled hemorrhoidopexy (Longo) is particularly advisable for circular hemorrhoids. Compared to excisional hemorrhoidectomy, the Longo operation is associated with reduced postoperative pain, shorter operation time and hospital stay, and a faster recovery, with the disadvantage of a higher recurrence rate.
Data from hemorrhoidal artery ligation (HAL), if appropriate in combination with recto-anal repair (HAL/RAR), demonstrate a similar trend towards better tolerance of the procedure at the expense of a higher recurrence rate. These relatively "new" procedures equally qualify for the treatment of grade III and IV hemorrhoids and, in the case of stapled hemorrhoidopexy, may even be employed in the emergency situation of an acute anal prolapse. While under certain circumstances different treatment options are equivalent, there is a clear specificity with respect to the application of these procedures in other situations. The respective pros and cons need to be discussed separately with every patient, and a treatment strategy has to be defined according to the patient's individual requirements.

Relevance: 10.00%

Abstract:

PURPOSE The aim of this work is to derive a theoretical framework for quantitative noise and temporal fidelity analysis of time-resolved k-space-based parallel imaging methods. THEORY An analytical formalism of noise distribution is derived extending the existing g-factor formulation for nontime-resolved generalized autocalibrating partially parallel acquisition (GRAPPA) to time-resolved k-space-based methods. The noise analysis considers temporal noise correlations and is further accompanied by a temporal filtering analysis. METHODS All methods are derived and presented for k-t-GRAPPA and PEAK-GRAPPA. A sliding window reconstruction and nontime-resolved GRAPPA are taken as a reference. Statistical validation is based on series of pseudoreplica images. The analysis is demonstrated on a short-axis cardiac CINE dataset. RESULTS The superior signal-to-noise performance of time-resolved over nontime-resolved parallel imaging methods at the expense of temporal frequency filtering is analytically confirmed. Further, different temporal frequency filter characteristics of k-t-GRAPPA, PEAK-GRAPPA, and sliding window are revealed. CONCLUSION The proposed analysis of noise behavior and temporal fidelity establishes a theoretical basis for a quantitative evaluation of time-resolved reconstruction methods. Therefore, the presented theory allows for comparison between time-resolved parallel imaging methods and also nontime-resolved methods. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc.
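The pseudoreplica validation mentioned in the Methods above can be sketched generically, independent of any specific GRAPPA kernel; the reconstruction operator here is a placeholder:

```python
import numpy as np

def pseudo_replica_noise(reconstruct, kspace, n_rep=64, sigma=1.0, seed=0):
    """Estimate the pixelwise noise standard deviation of an image
    reconstruction by repeatedly adding complex white Gaussian noise
    to k-space and reconstructing (the pseudoreplica method)."""
    rng = np.random.default_rng(seed)
    reps = [
        reconstruct(kspace + sigma * (rng.standard_normal(kspace.shape)
                                      + 1j * rng.standard_normal(kspace.shape)))
        for _ in range(n_rep)
    ]
    return np.stack(reps).std(axis=0)   # pixelwise std across replicas
```

For a unitary reconstruction (e.g. an orthonormal inverse FFT), the mean of the returned map should approach sigma·√2, the standard deviation of the injected complex noise; a non-unitary parallel-imaging reconstruction instead reveals the spatially varying noise amplification that the g-factor formalism describes analytically.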

Relevance: 10.00%

Abstract:

This study aims at assessing the skill of several climate field reconstruction (CFR) techniques to reconstruct past precipitation over continental Europe and the Mediterranean at seasonal time scales over the last two millennia from proxy records. A number of pseudoproxy experiments are performed within the virtual reality of a regional paleoclimate simulation at 45 km resolution to analyse different aspects of reconstruction skill. Canonical Correlation Analysis (CCA), two versions of an Analog Method (AM) and Bayesian hierarchical modeling (BHM) are applied to reconstruct precipitation from a synthetic network of pseudoproxies that are contaminated with various types of noise. The skill of the derived reconstructions is assessed through comparison with precipitation simulated by the regional climate model. Unlike BHM, CCA systematically underestimates the variance. The AM can be adjusted to overcome this shortcoming, presenting an intermediate behaviour between the two aforementioned techniques. However, a trade-off between reconstruction-target correlation and reconstructed variance is the drawback of all CFR techniques. CCA (BHM) presents the largest (lowest) skill in preserving the temporal evolution, whereas the AM can be tuned to reproduce better correlation at the expense of losing variance. While BHM has been shown to perform well for temperature, it relies heavily on prescribed spatial correlation lengths; this assumption is valid for temperature but hardly warranted for precipitation. In general, none of the methods outperforms the others. All experiments agree that a dense and regularly distributed proxy network is required to reconstruct precipitation accurately, reflecting its high spatial and temporal variability. This is especially true in summer, when a particularly short decorrelation distance from the proxy location is caused by localised convective precipitation events.
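The correlation/variance trade-off described above is typically quantified per grid cell with two simple scores; a minimal sketch:

```python
import numpy as np

def reconstruction_skill(target, recon):
    """Return (Pearson correlation, variance ratio) between a
    reconstructed series and its simulated target; ratios below 1
    indicate the variance underestimation noted for CCA."""
    r = np.corrcoef(target, recon)[0, 1]
    return r, recon.var() / target.var()
```

A reconstruction that is a damped copy of the target scores perfect correlation but a variance ratio well below one, which is exactly the behaviour the pseudoproxy experiments diagnose.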

Relevance: 10.00%

Abstract:

Auxin (IAA) is an important regulator of plant development and root differentiation. Although recent studies indicate that salicylic acid (SA) may also be important in this context by interfering with IAA signaling, comparatively little is known about its impact on the plant's physiology, metabolism, and growth characteristics. Using carbon-11, a short-lived radioisotope (t1/2 = 20.4 min) administered as 11CO2 to maize plants (B73), we measured changes in these functions under SA and IAA treatments. IAA application decreased total root biomass, though it increased lateral root growth at the expense of primary root elongation. IAA-mediated inhibition of root growth was correlated with decreased 11CO2 fixation, photosystem II (PSII) efficiency, and total leaf carbon export of 11C-photoassimilates and their allocation belowground. Furthermore, IAA application increased leaf starch content. On the other hand, SA application increased total root biomass, 11CO2 fixation, PSII efficiency, and leaf carbon export of 11C-photoassimilates, but it decreased leaf starch content. IAA and SA induction patterns were also examined after root-herbivore attack by Diabrotica virgifera to place possible hormone crosstalk into a realistic environmental context. We found that 4 days after infestation, IAA was induced in the midzone and root tip, whereas SA was induced only in the upper proximal zone of damaged roots. We conclude that antagonistic crosstalk exists between IAA and SA which can affect the development of maize plants, particularly through alteration of the root system's architecture, and we propose that the integration of both signals may shape the plant's response to environmental stress.
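Working with carbon-11 (t1/2 = 20.4 min) requires decay-correcting every measurement back to a common reference time; a minimal sketch of the standard exponential correction:

```python
import math

def decay_correct(counts, elapsed_min, half_life_min=20.4):
    """Correct a measured 11C count back to a reference time,
    assuming simple exponential decay with t1/2 = 20.4 min:
    N0 = N * exp(lambda * t), lambda = ln 2 / t1/2."""
    lam = math.log(2.0) / half_life_min
    return counts * math.exp(lam * elapsed_min)
```

After exactly one half-life, a measured 500 counts corrects back to 1000 at administration time.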