919 results for Two-step model


Relevance: 90.00%

Abstract:

Transformers are essential elements of any power system. Unfortunately, they are subjected to through-faults and abnormal operating conditions that can affect not only the transformer itself but also other equipment connected to it. It is therefore essential to provide sufficient protection for transformers, with the best possible selectivity and sensitivity. Nowadays, microprocessor-based relays are widely used to protect power equipment. Current differential and voltage protection strategies are used in transformer protection applications and provide fast, sensitive, multi-level protection and monitoring. The elements responsible for detecting turn-to-turn and turn-to-ground faults are the negative-sequence percentage differential element and the restricted earth-fault (REF) element, respectively. During severe internal faults, current transformers can saturate, slowing relay operation and thereby increasing the extent of equipment damage. The scope of this work is to develop a modeling methodology for simulations and laboratory tests of internal faults, such as turn-to-turn and turn-to-ground faults, in two step-down power transformers with capacity ratings of 11.2 MVA and 290 MVA. The simulated current waveforms are injected into a microprocessor relay to check its sensitivity to these internal faults. Saturation of current transformers is also studied. All simulations are performed with the Alternative Transients Program (ATP) using the internal fault model for three-phase two-winding transformers. The tested microprocessor relay is the SEL-487E current differential and voltage protection relay. The results showed that the ATP internal fault model can be used for testing microprocessor relays for any percentage of turns involved in an internal fault.
An interesting observation from the experiments was that the SEL-487E relay is more sensitive to turn-to-turn faults than advertised for the transformers studied. The sensitivity of the restricted earth-fault element was confirmed. CT saturation cases showed that low-accuracy CTs can saturate for faults involving a high percentage of turns, with the CT burden affecting the extent of saturation. Recommendations for future work include more accurate simulation of internal faults, transformer energization inrush, and other scenarios involving core saturation, using the newest version of the internal fault model, followed by renewed performance testing of the SEL-487E or other microprocessor relays. Also, applying a grounding bank to the delta-connected side of a transformer would extend the zone of protection, allowing relay performance to be tested for internal ground faults on both sides of the transformer.
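The CT saturation behavior described above can be illustrated with a deliberately simplified toy model (not the ATP model): the core flux is the time integral of the secondary voltage, and once the flux hits a saturation level the CT can no longer support that voltage and the ratio current collapses. All parameter values below (burden, saturation flux, 100 A / 50 Hz ratio current) are illustrative assumptions.

```python
import numpy as np

# Toy CT saturation sketch: integrate the flux demanded by the ideal ratio
# current across the burden; while |flux| stays below the saturation level the
# CT reproduces the current, otherwise the output collapses toward zero.
def ct_secondary_current(i_primary_over_n, burden_ohm, dt, flux_sat):
    flux, out = 0.0, []
    for i in i_primary_over_n:
        v = i * burden_ohm               # voltage the ideal ratio current requires
        new_flux = flux + v * dt
        if abs(new_flux) < flux_sat:     # core unsaturated: faithful reproduction
            flux = new_flux
            out.append(i)
        else:                            # core saturated: flux frozen at the limit
            flux = float(np.clip(new_flux, -flux_sat, flux_sat))
            out.append(0.0)
    return np.array(out)

t = np.arange(0.0, 0.1, 1e-4)
i_ratio = 100.0 * np.sin(2 * np.pi * 50 * t)   # hypothetical ratio current
i_sec = ct_secondary_current(i_ratio, burden_ohm=2.0, dt=1e-4, flux_sat=0.05)
```

Increasing `burden_ohm` raises the required voltage and thus drives the core into saturation earlier, consistent with the abstract's observation that the burden affects the extent of saturation.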

Relevance: 90.00%

Abstract:

Stereoselectivity has to be considered in the pharmacodynamic and pharmacokinetic properties of ketamine. Stereoselective biotransformation of ketamine was investigated in equine microsomes in vitro. Concentration-time curves were constructed, and enzyme activity was determined for different substrate concentrations using equine liver and lung microsomes. The concentrations of R/S-ketamine and R/S-norketamine were determined by enantioselective capillary electrophoresis. A two-phase model based on Hill kinetics was used to analyze the biotransformation of R/S-ketamine into R/S-norketamine and, in a second step, into R/S-downstream metabolites. In liver and lung microsomes, levels of R-ketamine exceeded those of S-ketamine at all time points, and S-norketamine exceeded R-norketamine at time points below the maximum concentration. In liver and lung microsomes, significant differences in enzyme velocity (V(max)) were observed between S- and R-norketamine formation, and between the V(max) of S-norketamine formation from S-ketamine alone and from S-ketamine in the racemate. Our in vitro investigations of microsomal reactions suggest that stereoselective ketamine biotransformation in horses occurs in the liver and the lung, with a slower elimination of S-ketamine in the presence of R-ketamine. Scaling of the in vitro parameters to liver and lung organ clearances provided an excellent fit with previously published in vivo data and confirmed a lung first-pass effect.
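The two-phase Hill-kinetics model has the general shape sketched below: a Hill-type velocity term for norketamine formation chained to a second Hill-type term for its further metabolism. All parameter values are illustrative placeholders, not the fitted equine parameters.

```python
# Minimal sketch of Hill kinetics and a two-step biotransformation chain
# (ketamine -> norketamine -> downstream metabolites), integrated with a
# simple Euler scheme. Vmax, S50 and Hill coefficients are hypothetical.
def hill_velocity(c, vmax, s50, n):
    """Reaction velocity at substrate concentration c (Hill kinetics)."""
    return vmax * c**n / (s50**n + c**n)

def simulate_two_step(c_ket0, vmax1, s50_1, n1, vmax2, s50_2, n2,
                      dt=0.01, t_end=10.0):
    """Euler integration of the two sequential Hill-kinetic steps."""
    ket, nor = c_ket0, 0.0
    for _ in range(int(t_end / dt)):
        v1 = hill_velocity(ket, vmax1, s50_1, n1)   # norketamine formation
        v2 = hill_velocity(nor, vmax2, s50_2, n2)   # norketamine elimination
        ket = max(ket - v1 * dt, 0.0)               # clamp to avoid overshoot
        nor = max(nor + (v1 - v2) * dt, 0.0)
    return ket, nor

ket_end, nor_end = simulate_two_step(50.0, vmax1=10.0, s50_1=20.0, n1=1.5,
                                     vmax2=5.0, s50_2=15.0, n2=1.0)
```

At `c == s50` the Hill velocity equals exactly half of Vmax, which is the usual check on such a parameterization.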

Relevance: 90.00%

Abstract:

The self-assembly and redox properties of two viologen derivatives, N-hexyl-N′-(6-thiohexyl)-4,4′-bipyridinium bromide (HS-6V6-H) and N,N′-bis(6-thiohexyl)-4,4′-bipyridinium bromide (HS-6V6-SH), immobilized on Au(111)-(1×1) macro-electrodes were investigated by cyclic voltammetry, surface-enhanced infrared absorption spectroscopy (SEIRAS) and in situ scanning tunneling microscopy (STM). Depending on the assembly conditions, one could distinguish three different types of adlayers for both viologens: a low-coverage disordered phase and an ordered striped phase of flat-oriented molecules, as well as a high-coverage monolayer composed of tilted viologen moieties. Both molecules, HS-6V6-H and HS-6V6-SH, were successfully immobilized on Au(poly) nano-electrodes, which gave a well-defined redox response in the lower pA current range. An in situ STM configuration was employed to explore the electron transport properties of single-molecule junctions Au(T)|HS-6V6-SH(HS-6V6-H)|Au(S). The observed sigmoidal potential dependence, measured at variable substrate potential ES and at constant bias voltage (ET−ES), was attributed to electronic structure changes of the viologen moiety during the one-electron reduction/re-oxidation process V2+ ↔ V+. Tunneling experiments in asymmetric, STM-based junctions Au(T)-S-6V6-H|Au(S) revealed current (iT)-voltage (ET) curves with a maximum located at the equilibrium potential of the redox process V2+ ↔ V+. The experimental iT-ET characteristics of the HS-6V6-H-modified tunneling junction were tentatively attributed to a sequential two-step electron transfer mechanism.
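A sequential two-step electron transfer can be caricatured as an electron hopping into the viologen redox level from one electrode and out to the other, with the steady-state current set by the two hop rates in series. The toy model below, with symmetric Butler-Volmer-like rate expressions, reproduces the qualitative feature reported above: a current maximum at the equilibrium potential of the V2+/V+ couple. The prefactor `k0`, the transfer coefficient and the neglect of the bias dependence are all illustrative simplifications, not the authors' quantitative model.

```python
import math

# Toy two-step sequential electron transfer: steady-state current through
# two rates in series, i = e * k_in * k_out / (k_in + k_out). With symmetric
# rates this peaks at zero overpotential (the redox equilibrium potential).
def junction_current(eta, k0=1.0e6, alpha=0.5, f=38.9):
    """eta: overpotential E - E_eq in volts; f = F/RT at room temperature."""
    k_in = k0 * math.exp(-alpha * f * eta)          # hop into the redox level
    k_out = k0 * math.exp((1.0 - alpha) * f * eta)  # hop out of the redox level
    e_charge = 1.602e-19                             # elementary charge (C)
    return e_charge * k_in * k_out / (k_in + k_out)
```

For `alpha = 0.5` the expression reduces to e·k0 / (2·cosh(f·eta/2)), which is symmetric about eta = 0 and maximal there.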

Relevance: 90.00%

Abstract:

Objectives: Etravirine (ETV) is metabolized by cytochrome P450 (CYP) 3A, 2C9, and 2C19; the metabolites are glucuronidated by uridine diphosphate glucuronosyltransferases (UGTs). To identify the potential impact of genetic and non-genetic factors involved in ETV metabolism, we carried out a two-step pharmacogenetics-based population pharmacokinetic study in HIV-1-infected individuals. Materials and methods: The study population included 144 individuals contributing 289 ETV plasma concentrations and four individuals contributing 23 ETV plasma concentrations collected in a rich sampling design. Genetic variants [n=125 single-nucleotide polymorphisms (SNPs)] in 34 genes with a predicted role in ETV metabolism were selected. A first-step population pharmacokinetic model included non-genetic and known genetic factors (seven SNPs in CYP2C, one SNP in CYP3A5) as covariates. Post-hoc individual ETV clearance (CL) was used in a second (discovery) step, in which the effect of the remaining 98 SNPs in CYP3A, P450 cytochrome oxidoreductase (POR), nuclear receptor genes, and UGTs was investigated. Results: A one-compartment model with zero-order absorption best characterized ETV pharmacokinetics. The average ETV CL was 41 l/h (CV 51.1%), the volume of distribution was 1325 l, and the mean absorption time was 1.2 h. The administration of darunavir/ritonavir or tenofovir was the only non-genetic covariate influencing ETV CL significantly, resulting in a 40% [95% confidence interval (CI): 13-69%] and a 42% (95% CI: 17-68%) increase in ETV CL, respectively. Carriers of rs4244285 (CYP2C19*2) had 23% (8-38%) lower ETV CL. Co-administered antiretroviral agents and genetic factors explained 16% of the variance in ETV concentrations. None of the SNPs in the discovery step influenced ETV CL. Conclusion: ETV concentrations are highly variable, and co-administered antiretroviral agents and genetic factors explained only a modest part of the interindividual variability in ETV elimination.
Opposing effects of interacting drugs can effectively abrogate genetic influences on ETV CL, and vice versa.
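A one-compartment model with zero-order absorption, as selected above, can be sketched as a constant-rate input of duration d followed by first-order elimination. CL = 41 l/h and V = 1325 l are taken from the abstract; the dose and the input duration (d = 2.4 h, i.e. twice the reported mean absorption time, since the mean absorption time of a zero-order input of duration d is d/2) are illustrative assumptions.

```python
import math

# Sketch of a single-dose concentration profile for a one-compartment model
# with zero-order absorption: constant input rate dose/d for duration d,
# then mono-exponential decline with rate constant ke = CL/V.
def conc(t, dose=200.0, d=2.4, cl=41.0, v=1325.0):
    """Plasma concentration (mg/l) at time t (h) after the start of dosing."""
    ke = cl / v                       # elimination rate constant (1/h)
    rate = dose / d                   # zero-order input rate (mg/h)
    if t <= d:                        # during the absorption phase
        return rate / cl * (1.0 - math.exp(-ke * t))
    # after absorption ends, decay from the concentration reached at t = d
    c_end = rate / cl * (1.0 - math.exp(-ke * d))
    return c_end * math.exp(-ke * (t - d))
```

The profile rises during the input period, peaks at t = d, and then declines with half-life ln(2)·V/CL.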

Relevance: 90.00%

Abstract:

When tilted sideways, participants misperceive the visual vertical, assessed by means of a luminous line in otherwise complete darkness. A recent modeling approach (De Vrijer et al., 2009) claimed that these typical patterns of errors (known as A- and E-effects) can be explained by assuming that participants behave in a Bayes-optimal manner. In this study, we experimentally manipulate participants' prior information about body-in-space orientation and measure the effect of this manipulation on the subjective visual vertical (SVV). Specifically, we explore the effects of veridical and misleading instructions about body tilt orientation on the SVV. We used a psychophysical 2AFC SVV task at roll tilt angles of 0 degrees, 16 degrees, and 4 degrees CW and CCW. Participants were tilted to 4 degrees under different instruction conditions: in one condition, participants received veridical instructions as to their tilt angle, whereas in another condition, participants received the misleading instruction that their body position was perfectly upright. Our results indicate systematic differences between the instruction conditions at 4 degrees CW and CCW. Participants did not simply use an ego-centric reference frame in the misleading condition; instead, their estimates of the SVV seem to lie between their head's Z-axis and the estimate of the SVV as measured in the veridical condition. All participants displayed A-effects at roll tilt angles of 16 degrees CW and CCW. We discuss our results in the context of the Bayesian model of De Vrijer et al. (2009), and claim that this pattern of results is consistent with a manipulation of the precision of a prior distribution over body-in-space orientations.
Furthermore, we introduce a Bayesian generalized linear model for estimating the parameters of participants' psychometric functions, which allows us to jointly estimate group-level and individual-level parameters under all experimental conditions simultaneously, rather than relying on the traditional two-step approach to obtaining group-level parameter estimates.
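The psychometric function underlying a 2AFC SVV task is typically a cumulative Gaussian: the probability of a "clockwise" response to a line at orientation x, with bias mu (the SVV estimate) and slope sigma (precision). The grid-search maximum-likelihood fit below is an illustrative stand-in for the hierarchical Bayesian GLM introduced in the abstract; all numbers are synthetic.

```python
import numpy as np
from math import erf, sqrt

# Cumulative-Gaussian psychometric function for a 2AFC task.
def p_cw(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Illustrative maximum-likelihood estimate of the bias mu over a grid,
# with the slope sigma held fixed for simplicity.
def fit_mu(x, n_cw, n_total, sigma=2.0, grid=np.linspace(-10, 10, 2001)):
    best_mu, best_ll = None, -np.inf
    for mu in grid:
        p = np.clip([p_cw(xi, mu, sigma) for xi in x], 1e-9, 1 - 1e-9)
        ll = np.sum(n_cw * np.log(p) + (n_total - n_cw) * np.log(1 - p))
        if ll > best_ll:
            best_mu, best_ll = mu, ll
    return best_mu

# synthetic data: 100 trials at each of 13 line orientations, true mu = 1.5
x_test = np.linspace(-6.0, 6.0, 13)
n_cw = np.round(100 * np.array([p_cw(x, 1.5, 2.0) for x in x_test]))
mu_hat = fit_mu(x_test, n_cw, np.full(13, 100.0))
```

A hierarchical model replaces the per-condition point estimate `mu_hat` with jointly estimated group-level and individual-level parameters, which is the advantage claimed over the two-step approach.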

Relevance: 90.00%

Abstract:

The maintenance of genetic variation in a spatially heterogeneous environment has been one of the main research themes in theoretical population genetics. Despite considerable progress in understanding the consequences of spatially structured environments for genetic variation, many problems remain unsolved. One of them concerns the relationship between the number of demes, the degree of dominance, and the maximum number of alleles that can be maintained by selection in a subdivided population. In this work, we study the potential for maintaining genetic variation in a two-deme model with a deme-independent degree of intermediate dominance, which includes the absence of G x E interaction as a special case. We present a thorough numerical analysis of a two-deme three-allele model, which allows us to identify dominance and selection patterns that harbor the potential for stable triallelic equilibria. The information gained by this approach is then used to construct an example in which the existence and asymptotic stability of a fully polymorphic equilibrium can be proved analytically. Notably, in this example the parameter range in which three alleles can coexist is maximized for intermediate migration rates. Our results can be interpreted in a specialist-generalist context and, among other things, show when two specialists can coexist with a generalist in two demes if the degree of dominance is deme-independent and intermediate. The dominance relation between the generalist allele and the specialist alleles plays a decisive role. We also discuss linear selection on a quantitative trait and show that G x E interaction is not necessary for the maintenance of more than two alleles in two demes.
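The basic recursion behind such a model can be sketched as selection within each deme followed by migration between demes. The genotype fitness matrices below (two "specialist" alleles favored in opposite demes plus a "generalist") and the migration rate are hypothetical illustrations, not the configurations analyzed in the paper.

```python
import numpy as np

# One generation of a two-deme, k-allele migration-selection model:
# selection within each deme via a symmetric genotype fitness matrix,
# then symmetric migration at rate m between the two demes.
def step(p, W, m):
    """p: (2, k) allele frequencies per deme; W: (2, k, k) genotype fitnesses."""
    p_sel = np.empty_like(p)
    for d in range(2):
        w_marg = W[d] @ p[d]              # marginal fitness of each allele
        w_bar = p[d] @ w_marg             # mean fitness in deme d
        p_sel[d] = p[d] * w_marg / w_bar  # selection within the deme
    return (1 - m) * p_sel + m * p_sel[::-1]   # symmetric migration

# hypothetical fitnesses: alleles ordered (specialist 1, generalist, specialist 2),
# with the two demes mirror images of each other
W = np.array([[[1.2, 1.1, 1.0], [1.1, 1.0, 0.9], [1.0, 0.9, 0.8]],
              [[0.8, 0.9, 1.0], [0.9, 1.0, 1.1], [1.0, 1.1, 1.2]]])
p = np.full((2, 3), 1.0 / 3.0)
for _ in range(500):
    p = step(p, W, m=0.1)
```

Iterating the recursion from an interior starting point and checking whether all three frequencies stay bounded away from zero is the numerical strategy for spotting candidate stable triallelic equilibria.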

Relevance: 90.00%

Abstract:

Objectives: To investigate surface roughness and microhardness of two recent resin-ceramic materials for computer-aided design/computer-aided manufacturing (CAD/CAM) after polishing with three polishing systems. Surface roughness and microhardness were measured immediately after polishing and after six months storage including monthly artificial toothbrushing. Methods: Sixty specimens of Lava Ultimate (3M ESPE) and 60 specimens of VITA ENAMIC (VITA Zahnfabrik) were roughened in a standardized manner and polished with one of three polishing systems (n=20/group): Sof-Lex XT discs (SOFLEX; three-step (medium-superfine); 3M ESPE), VITA Polishing Set Clinical (VITA; two-step; VITA Zahnfabrik), or KENDA Unicus (KENDA; one-step; KENDA Dental). Surface roughness (Ra; μm) was measured with a profilometer and microhardness (Vickers; VHN) with a surface hardness indentation device. Ra and VHN were measured immediately after polishing and after six months storage (tap water, 37°C) including monthly artificial toothbrushing (500 cycles/month, toothpaste RDA ~70). Ra- and VHN-values were analysed with nonparametric ANOVA followed by Wilcoxon rank sum tests (α=0.05). Results: For Lava Ultimate, Ra (mean [standard deviation] before/after storage) remained the same when polished with SOFLEX (0.18 [0.09]/0.19 [0.10]; p=0.18), increased significantly with VITA (1.10 [0.44]/1.27 [0.39]; p=0.0001), and decreased significantly with KENDA (0.35 [0.07]/0.33 [0.08]; p=0.03). VHN (mean [standard deviation] before/after storage) decreased significantly regardless of polishing system (SOFLEX: 134.1 [5.6]/116.4 [3.6], VITA: 138.2 [10.5]/115.4 [5.9], KENDA: 135.1 [6.2]/116.7 [6.3]; all p<0.0001). For VITA ENAMIC, Ra (mean [standard deviation] before/after storage) increased significantly when polished with SOFLEX (0.37 [0.18]/0.41 [0.14]; p=0.01) and remained the same with VITA (1.32 [0.37]/1.31 [0.40]; p=0.58) and with KENDA (0.81 [0.35]/0.78 [0.32]; p=0.21). 
VHN (mean [standard deviation] before/after storage) remained the same regardless of polishing system (SOFLEX: 284.9 [24.6]/282.4 [31.8], VITA: 284.6 [28.5]/276.4 [25.8], KENDA: 292.6 [26.9]/282.9 [24.3]; p=0.42-1.00). Conclusion: Surface roughness and microhardness of Lava Ultimate were more affected by storage and artificial toothbrushing than those of VITA ENAMIC.
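The before/after comparisons above rest on Wilcoxon rank sum tests. The sketch below implements the test with a normal approximation (no tie correction) on synthetic roughness data; the means and standard deviations are borrowed from the abstract only to make the example plausible, not to reproduce the study's measurements.

```python
import numpy as np
from math import erf, sqrt

# Wilcoxon rank-sum test via the normal approximation (assumes no ties).
def rank_sum_test(a, b):
    data = np.concatenate([a, b])
    ranks = np.argsort(np.argsort(data)) + 1.0   # ranks of all observations
    w = ranks[:len(a)].sum()                     # rank sum of the first sample
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2.0                # mean of W under H0
    sd = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)    # sd of W under H0
    z = (w - mu) / sd
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))  # two-sided p
    return z, p

rng = np.random.default_rng(0)
ra_before = rng.normal(0.37, 0.18, 20)   # synthetic Ra values before storage
ra_after = rng.normal(0.41, 0.14, 20)    # synthetic Ra values after storage
z, p = rank_sum_test(ra_before, ra_after)
```

With n = 20 per group the normal approximation is reasonable; production analyses would use an exact implementation with tie handling.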

Relevance: 90.00%

Abstract:

Introduction: Cervical vertebral (C) malformation is rarely reported in large-breed dogs. Congenital cervical kyphosis (CCK) may result from defects of vertebral segmentation, failure of formation, or both. This report describes two cases of C3-C4 CCK in young sighthounds, treated surgically. Case description: An 18-month-old female Deerhound and a six-week-old female Borzoi were presented because of reluctance to exercise and signs of neck pain. Both dogs were neurologically normal. Diagnostic imaging revealed C3-C4 deformity, moderate kyphosis, and spinal canal stenosis associated with chronic spinal cord pressure atrophy. Both dogs underwent surgical treatment. Results: A staged two-step surgery starting with dorsal decompression was elected for the Deerhound. After the first surgical procedure, the dog developed focal myelomalacia and phrenic nerve paralysis and was euthanized. A ventral distraction-fusion technique with two locking plates was performed in the Borzoi. This patient recovered uneventfully, and long-term follow-up computed tomography revealed complete spondylodesis. Clinical significance: To date, CCK has only been described in sighthounds. Congenital cervical kyphosis might be considered a differential diagnosis in these breeds when they present with signs of cervical pain. Ventral realignment-fusion and bone grafting may be considered for surgical treatment, although the earliest age at which this procedure can and should be performed remains unclear.

Relevance: 90.00%

Abstract:

The objective of this research has been to study the molecular basis of chromosome aberration formation. Predicated on a variety of data, Mitomycin C (MMC)-induced DNA damage has been postulated to cause the formation of chromatid breaks (and gaps) by preventing the replication of regions of the genome prior to mitosis. The basic protocol for these experiments involved treating synchronized HeLa cells in G1-phase with a 1 µg/ml dose of MMC for one hour. After removing the drug, cells were allowed to progress to mitosis and were harvested for analysis by selective detachment. Utilizing the alkaline elution assay for DNA damage, evidence was obtained to support the conclusion that HeLa cells can progress through S-phase into mitosis with intact DNA-DNA interstrand crosslinks. A higher level of crosslinking was observed in those cells remaining in interphase compared with those able to reach mitosis at the time of analysis. Dual radioisotope labeling experiments revealed that, at this dose, these crosslinks were associated to the same extent with both parental and newly replicated DNA. This finding was shown not to be the result of a two-step crosslink formation mechanism in which crosslink levels increase with time after drug treatment, nor an artefact of the double-labeling protocol. Neutral CsCl density gradient ultracentrifugation of mitotic cells containing BrdU-labeled newly replicated DNA showed that control cells exhibited one major peak at a heavy/light density. MMC-treated cells had this same major peak at the heavy/light density, in addition to a minor peak at a density characteristic of light/light DNA. This was interpreted as indicating either (1) that some parental DNA had not been replicated in the MMC-treated sample, or (2) that a recombination repair mechanism was operational.
To distinguish between these two possibilities, flow cytometric DNA fluorescence (i.e., DNA content) measurements of MMC-treated and control cells were made. These studies revealed that mitotic cells that had been treated with MMC while in G1-phase displayed a 10-20% lower DNA content than untreated control cells when measured under conditions that neutralize chromosome condensation effects (i.e., hypotonic treatment). These measurements were made under conditions in which the binding of the drug was shown not to interfere with the stoichiometry of the ethidium bromide-mithramycin stain. At the chromosome level, differential staining techniques were used in an attempt to visualize unreplicated regions of the genome, but staining indicative of large unreplicated regions was not observed. These results are best explained by a recombinogenic mechanism, and a model consistent with them has been proposed.

Relevance: 90.00%

Abstract:

Objective. To measure the demand for primary care and its associated factors by building and estimating a demand model of primary care in urban settings. Data source. Secondary data from the 2005 California Health Interview Survey (CHIS 2005), a population-based random-digit-dial telephone survey conducted by the UCLA Center for Health Policy Research in collaboration with the California Department of Health Services and the Public Health Institute between July 2005 and April 2006. Study design. A literature review was done to specify the demand model by identifying relevant predictors and indicators. CHIS 2005 data were utilized for demand estimation. Analytical methods. A probit regression was used to estimate the use/non-use equation, and a negative binomial regression was applied to the utilization equation with its non-negative integer dependent variable. Results. The model included two equations: the use/non-use equation explained the probability of making a doctor visit in the past twelve months, and the utilization equation estimated the demand for primary care conditional on at least one visit. Among the independent variables, wage rate and income did not affect primary care demand, whereas age had a negative effect. People with college and graduate educational levels were associated with 1.03 (p < 0.05) and 1.58 (p < 0.01) more visits, respectively, compared with those with no formal education. Insurance was significantly and positively related to the demand for primary care (p < 0.01). Need-for-care variables exhibited positive effects on demand (p < 0.01): existence of chronic disease was associated with 0.63 more visits, disability status with 1.05 more visits, and people with poor health status had 4.24 more visits than those with excellent health status. Conclusions. The average probability of visiting a doctor in the past twelve months was 85%, and the average number of visits was 3.45. The study emphasized the importance of need variables in explaining healthcare utilization, as well as the impact of insurance, employment, and education on demand. The two-equation model of decision-making, estimated with probit and negative binomial regressions, was a useful approach to demand estimation for primary care in urban settings.
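The two-equation structure can be sketched as a probit use/non-use equation combined with a log-link count equation conditional on use, so that the unconditional expected number of visits is the product of the two parts. The covariates and coefficient values below are hypothetical placeholders, not the CHIS 2005 estimates.

```python
from math import erf, sqrt, exp

# Probit first part: P(at least one visit) = Phi(x'beta).
def probit_p_use(x, beta):
    xb = sum(xi * bi for xi, bi in zip(x, beta))
    return 0.5 * (1.0 + erf(xb / sqrt(2.0)))

# Two-part prediction: unconditional mean visits = P(use) * E[visits | use],
# with a log link for the conditional count mean (as in a negative binomial
# regression's mean function).
def expected_visits(x, beta_use, beta_count):
    xb = sum(xi * bi for xi, bi in zip(x, beta_count))
    return probit_p_use(x, beta_use) * exp(xb)

# hypothetical covariate vector: [intercept, insured, chronic disease]
x = [1.0, 1.0, 1.0]
ev = expected_visits(x, beta_use=[0.5, 0.4, 0.3], beta_count=[0.6, 0.2, 0.4])
```

Note the negative binomial enters only through estimation (it relaxes the Poisson equal-mean-variance restriction); its mean prediction has the same log-linear form used here.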

Relevance: 90.00%

Abstract:

Preventable Hospitalizations (PHs) are hospitalizations that can be avoided with appropriate and timely care in the ambulatory setting and hence are closely associated with primary care access in a community. Increased primary care availability and health insurance coverage may increase primary care access, and consequently may be significantly associated with risks and costs of PHs. Objective. To estimate the risk and cost of preventable hospitalizations (PHs); to determine the association of primary care availability and health insurance coverage with the risk and costs of PHs, first alone and then simultaneously; and finally, to estimate the impact of expansions in primary care availability and health insurance coverage on the burden of PHs among non-elderly adult residents of Harris County. Methods. The study population was residents of Harris County, age 18 to 64, who had at least one hospital discharge in a Texas hospital in 2008. The primary independent variables were availability of primary care physicians, availability of primary care safety net clinics and health insurance coverage. The primary dependent variables were PHs and associated hospitalization costs. The Texas Health Care Information Collection (THCIC) Inpatient Discharge data was used to obtain information on the number and costs of PHs in the study population. Risk of PHs in the study population, as well as average and total costs of PHs were calculated. Multivariable logistic regression models and two-step Heckman regression models with log-transformed costs were used to determine the association of primary care availability and health insurance coverage with the risk and costs of PHs respectively, while controlling for individual predisposing, enabling and need characteristics. Predicted PH risk and cost were used to calculate the predicted burden of PHs in the study population and the impact of expansions in primary care availability and health insurance coverage on the predicted burden. 
Results. In 2008, hospitalized non-elderly adults in Harris County had 11,313 PHs and a corresponding PH risk of 8.02%. Congestive heart failure was the most common PH. PHs imposed a total economic burden of $84 million, at an average of $7,449 per PH. Higher primary care safety-net availability was significantly associated with a lower risk of PHs in the final risk model, but only in the uninsured: a unit increase in safety-net availability led to a 23% decline in PH odds in the uninsured, compared with only a 4% decline in the insured. Higher primary care physician availability was associated with increased PH costs in the final cost model (β=0.0020; p<0.05). Lack of health insurance coverage increased the risk of PH, with the uninsured having 30% higher odds of PHs (OR=1.299; p<0.05), but reduced the cost of a PH by 7% (β=-0.0668; p<0.05). Expansions in primary care availability and health insurance coverage were associated with a reduction of about $1.6 million in the PH burden at the highest level of expansion. Conclusions. Availability of primary care resources and health insurance coverage among hospitalized non-elderly adults in Harris County are significantly associated with the risk and costs of PHs. Expansions in these primary care access factors can be expected to produce significant reductions in the burden of PHs in Harris County.
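The cost models above use a two-step Heckman correction: a first-stage probit for selection, whose fitted index yields an inverse Mills ratio that enters the second-stage regression on log-transformed costs. The sketch below shows only the second stage on synthetic data, taking a hypothetical first-stage index as given; all coefficients and the covariate interpretation are illustrative, not the study's estimates.

```python
import numpy as np
from math import erf, sqrt

# Inverse Mills ratio lambda(xg) = phi(xg) / Phi(xg) for a probit index xg.
def inverse_mills(xg):
    phi = np.exp(-0.5 * xg**2) / np.sqrt(2.0 * np.pi)      # standard normal pdf
    Phi = 0.5 * (1.0 + np.vectorize(erf)(xg / sqrt(2.0)))  # standard normal cdf
    return phi / Phi

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)           # a covariate (e.g. physician availability)
xg = 0.5 + 0.8 * x               # hypothetical fitted first-stage probit index
imr = inverse_mills(xg)
log_cost = 8.0 + 0.3 * x + 0.5 * imr + rng.normal(scale=0.1, size=n)

# Second-stage OLS: regress log cost on [1, covariate, inverse Mills ratio];
# the coefficient on imr captures the selection correction.
X = np.column_stack([np.ones(n), x, imr])
beta, *_ = np.linalg.lstsq(X, log_cost, rcond=None)
```

In practice the first-stage index is itself estimated by probit MLE and standard errors must account for that estimation step, which this sketch omits.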

Relevance: 90.00%

Abstract:

The development of targeted therapy involves many challenges. Our study addresses some of the key issues involved in biomarker identification and clinical trial design. We propose two biomarker selection methods and apply them in two different clinical trial designs for targeted therapy development. In particular, we propose a Bayesian two-step lasso procedure for biomarker selection in the proportional hazards model in Chapter 2. In the first step of this strategy, we use the Bayesian group lasso to identify the important marker groups, wherein each group contains the main effect of a single marker and its interactions with treatments. In the second step, we zoom in to select each individual marker and the interactions between markers and treatments, in order to identify prognostic or predictive markers using the Bayesian adaptive lasso. In Chapter 3, we propose a Bayesian two-stage adaptive design for targeted therapy development that implements the variable selection method of Chapter 2. In Chapter 4, we propose an alternative frequentist adaptive randomization strategy for situations where a large number of biomarkers need to be incorporated in the study design, together with a new adaptive randomization rule that takes into account the variation associated with the point estimates of survival times. In all of our designs, we seek to identify the key markers that are either prognostic or predictive with respect to treatment, and we use extensive simulations to evaluate the operating characteristics of our methods.

Relevance: 90.00%

Abstract:

Benthic foraminiferal stable isotope records for the past 11 Myr from a recently drilled site in the sub-Antarctic South Atlantic (Site 1088, Ocean Drilling Program Leg 177, 41°S, 15°E, 2082 m water depth) provide, for the first time, a continuous long-term perspective on deep water distribution patterns and Southern Ocean climate change from the late Miocene through the early Pliocene. I have compiled published late Miocene through Pliocene stable isotope records to place the new South Atlantic record in a global framework. Carbon isotope gradients between the North Atlantic, South Atlantic, and Pacific indicate that a nutrient-depleted watermass, probably of North Atlantic origin, reached the sub-Antarctic South Atlantic after 6.6 Ma. By 6.0 Ma the relative proportion of the northern-provenance watermass was similar to today's, and by the early Pliocene it had increased to greater than the modern proportion, suggesting that thermohaline overturn in the Atlantic was relatively strong prior to the early Pliocene interval of inferred climatic warmth. Site 1088 oxygen isotope values display a two-step increase between ~7.4 Ma and 6.9 Ma, a trend that parallels a published δ18O record from a site on the Atlantic coast of Morocco. This is perhaps best explained by a gradual cooling of watermasses that were sinking in the Southern Ocean. I speculate that relatively strong thermohaline overturn during the latest Miocene, at rates comparable to the present-day interglacial, may have provided the initial conditions for early Pliocene climatic warmth. The impact of an emerging Central American Seaway on Atlantic-Pacific upper-water exchange may have been felt in the North Atlantic beginning in the latest Miocene, between 6.6 and 6.0 Ma, ~1.5 Myr earlier than previously thought.

Relevance: 90.00%

Abstract:

The climate during the Cenozoic era changed in several steps from ice-free poles and warm conditions to ice-covered poles and cold conditions. Since the 1950s, a body of information on ice volume and temperature changes has been built up, predominantly on the basis of measurements of the oxygen isotopic composition of shells of benthic foraminifera collected from marine sediment cores. The statistical methodology of time series analysis has also evolved, allowing more information to be extracted from these records. Here we provide a comprehensive view of Cenozoic climate evolution by means of a coherent and systematic application of time series analytical tools to each record from a compilation spanning the interval from 4 to 61 Myr ago. We quantitatively describe several prominent features of the oxygen isotope record, taking into account the various sources of uncertainty (including measurement error, proxy noise, and dating errors). The estimated transition times and amplitudes allow us to assess causal climatological-tectonic influences on the following known features of the Cenozoic oxygen isotopic record: the Paleocene-Eocene Thermal Maximum, the Eocene-Oligocene Transition, the Oligocene-Miocene Boundary, and the Middle Miocene Climate Optimum. We further describe and causally interpret the following features: the Paleocene-Eocene warming trend, the two-step, long-term Eocene cooling, and the changes within the most recent interval (Miocene-Pliocene). We review the scope and methods of constructing Cenozoic stacks of benthic oxygen isotope records and present two new latitudinal stacks, which capture, in addition to global ice volume, bottom-water temperatures at low (less than 30°) and high latitudes. This review concludes with an identification of future directions for data collection, statistical method development, and climate modeling.
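Estimating transition times and amplitudes in a noisy isotope record is often done by fitting a ramp model: a constant level, a linear transition between times t1 and t2, then a second level. The grid-search least-squares fit below is an illustrative simplification of that idea on synthetic data, not the review's actual estimation procedure (which also propagates dating and proxy uncertainties).

```python
import numpy as np

# Ramp model: level y1 before t1, linear transition, level y2 after t2.
def ramp(t, t1, t2, y1, y2):
    return np.where(t < t1, y1, np.where(t > t2, y2,
                    y1 + (y2 - y1) * (t - t1) / (t2 - t1)))

# Grid search over (t1, t2); levels are estimated from the outer segments.
def fit_ramp(t, y, grid):
    best = None
    for t1 in grid:
        for t2 in grid:
            if t2 <= t1:
                continue
            y1, y2 = y[t < t1].mean(), y[t > t2].mean()
            rss = np.sum((y - ramp(t, t1, t2, y1, y2))**2)
            if best is None or rss < best[0]:
                best = (rss, t1, t2, y1, y2)
    return best[1:]

# synthetic record: transition between t = 4 and t = 6, amplitude 2
t = np.linspace(0.0, 10.0, 200)
rng = np.random.default_rng(2)
y = ramp(t, 4.0, 6.0, 1.0, 3.0) + rng.normal(scale=0.05, size=t.size)
t1, t2, y1, y2 = fit_ramp(t, y, grid=np.arange(1.0, 9.1, 0.5))
```

The fitted (t1, t2) give the transition onset and end, and y2 − y1 gives the amplitude; uncertainty in these estimates would normally be assessed by bootstrap or Monte Carlo perturbation of the age model.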

Relevance: 90.00%

Abstract:

This talk illustrates how results from various Stata commands can be processed efficiently for inclusion in customized reports. A two-step procedure is proposed in which results are gathered and archived in the first step and then tabulated in the second step. Such an approach disentangles the task of computing results (which may take a long time) from that of preparing results for inclusion in presentations, papers, and reports (which you may have to do over and over). Examples using results from model estimation commands and various other Stata commands, such as tabulate, summarize, or correlate, are presented. Users will also be shown how to dynamically link results into word processors or into LaTeX documents.
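The gather-then-tabulate idea is language-agnostic; the talk demonstrates it with Stata commands, but the workflow can be sketched in Python. Step 1 computes results once and archives them in a neutral format; step 2 re-reads the archive and formats a table, so tables can be regenerated freely without re-running slow computations. The model names and numbers below are hypothetical.

```python
import csv
import io

# Step 1: gather and archive results (an in-memory CSV stands in for a
# results file on disk; in Stata this role is played by an archived dataset).
results = [{"model": "m1", "coef": 0.412, "se": 0.051},
           {"model": "m2", "coef": 0.397, "se": 0.048}]
archive = io.StringIO()
writer = csv.DictWriter(archive, fieldnames=["model", "coef", "se"])
writer.writeheader()
writer.writerows(results)

# Step 2: tabulate the archived results for a report, independently of step 1.
archive.seek(0)
rows = list(csv.DictReader(archive))
table = "\n".join(f"{r['model']}: {float(r['coef']):.2f} ({float(r['se']):.2f})"
                  for r in rows)
```

Because the archive is the only interface between the two steps, the formatting code can be revised and re-run as often as needed without touching the estimation code.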