269 results for ancillary


Relevance: 10.00%

Abstract:

Purpose. To evaluate the use of the Legionella urine antigen test as a cost-effective method for diagnosing Legionnaires' disease in five San Antonio hospitals from January 2007 to December 2009.

Methods. The data reported by five San Antonio hospitals to the San Antonio Metropolitan Health District during a 3-year retrospective study (January 2007 to December 2009) were evaluated for the frequency of non-specific pneumonia infections, the number of Legionella urine antigen tests performed, and the percentage of positive cases of Legionnaires' disease diagnosed by the test.

Results. A total of 7,087 cases of non-specific pneumonia were reported across the five San Antonio hospitals studied from 2007 to 2009. A total of 5,371 Legionella urine antigen tests were performed over the same period, and 38 positive cases of Legionnaires' disease were identified by the test.

Conclusions. Despite the limitations of this study in obtaining sufficient relevant data to evaluate the cost-effectiveness of the Legionella urine antigen test in diagnosing Legionnaires' disease, the test is simple, accurate, and fast, as results can be obtained within minutes to hours; it is also convenient because it can be performed in the emergency department on any patient who presents with clinical signs or symptoms of pneumonia. Over the long run, it remains to be shown whether this test can decrease mortality, lower total medical costs by reducing the number of broad-spectrum antibiotics prescribed, shorten patient wait times and hospital stays, decrease the need for unnecessary ancillary testing, and improve overall patient outcomes.
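The counts above allow a quick derived figure: test positivity and testing coverage. A minimal sketch of the arithmetic, using only the numbers quoted in the abstract:

```python
# Counts quoted in the abstract (five San Antonio hospitals, 2007-2009).
pneumonia_cases = 7087   # non-specific pneumonia cases reported
tests_performed = 5371   # Legionella urine antigen tests performed
positive_cases = 38      # positive Legionnaires' disease results

positivity = positive_cases / tests_performed
tests_per_case = tests_performed / pneumonia_cases

print(f"Test positivity: {positivity:.2%}")                        # ~0.71%
print(f"Tests per reported pneumonia case: {tests_per_case:.2f}")  # ~0.76
```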

Relevance: 10.00%

Abstract:

Multiple studies have shown an association between periodontitis and coronary heart disease (CHD), attributed to the chronic inflammatory nature of periodontitis, and studies have indicated similar risk factors and pathophysiologic mechanisms for the two conditions. Among these factors, smoking is the most discussed common risk factor, and some studies have suggested that the periodontitis-CHD association is largely a result of confounding by smoking, or of inadequate adjustment for it. We conducted a secondary data analysis of the Dental ARIC Study, an ancillary study to the ARIC Study, to evaluate the effect of smoking on the periodontitis-CHD association using three periodontitis classifications: BGI, AAP-CDC, and the Dental ARIC classification (Beck et al., 2001). We also compared these results with edentulous ARIC participants. Using Cox proportional hazards models, we found that individuals with the most severe form of periodontitis in each of the three classifications (BGI: HR = 1.56, 95% CI 1.15-2.13; AAP-CDC: HR = 1.42, 95% CI 1.13-1.79; Dental ARIC: HR = 1.49, 95% CI 1.22-1.83) were at significantly higher risk of incident CHD in the unadjusted models, whereas only BGI-P3 showed a statistically significant increased risk in the smoking-adjusted models (HR = 1.43, 95% CI 1.04-1.96). However, none of the categories in any of the classifications showed a significant association when a list of traditional CHD risk factors was introduced into the models. On the other hand, edentulous participants showed significant results when compared with dentate ARIC participants in the crude (HR = 1.56, 95% CI 1.34-1.82); smoking-adjusted (HR = 1.39, 95% CI 1.18-1.64); age-, race- and sex-adjusted (HR = 1.52, 95% CI 1.30-1.77); and ARIC traditional risk factors (except smoking) adjusted (HR = 1.27, 95% CI 1.02-1.57) models. The risk also remained significantly higher when smoking was introduced into the age-, sex- and race-adjusted model (HR = 1.38, 95% CI 1.17-1.63). Smoking did not reduce the hazard ratio by more than 8% when it was included in any of the Cox models.

This is the first study to include the three most recent case definitions of periodontitis simultaneously while examining the association with incident coronary heart disease. We found smoking to be a partial confounder of the periodontitis-CHD association, and edentulism to be significantly associated with incident CHD even after adjusting for smoking and the ARIC traditional risk factors. The differences among the three periodontitis classifications were not statistically significant when tested for equality of the areas under their ROC curves, but this should not be confused with their clinical significance.
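The abstract reports hazard ratios from Cox proportional hazards models. As an illustration of how such models are typically fitted (not the authors' actual code), here is a minimal sketch using the Python lifelines library; the file and column names are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis file: one row per participant, with follow-up time,
# an incident-CHD event flag, a severe-periodontitis indicator, and smoking.
df = pd.read_csv("dental_aric.csv")

cph = CoxPHFitter()

# Crude model: severe periodontitis (e.g., BGI-P3) only.
cph.fit(df[["followup_years", "chd_event", "bgi_p3"]],
        duration_col="followup_years", event_col="chd_event")
cph.print_summary()  # the exp(coef) column is the hazard ratio with 95% CI

# Smoking-adjusted model: add the smoking covariate.
cph.fit(df[["followup_years", "chd_event", "bgi_p3", "pack_years"]],
        duration_col="followup_years", event_col="chd_event")
cph.print_summary()
```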

Relevance: 10.00%

Abstract:

The main aim of this study was to examine the association between Clostridium difficile infection (CDI) and HIV. A secondary goal was to examine the trend in CDI-related deaths in Texas from 1999 to 2011. To evaluate CDI-HIV coinfection, we analyzed two datasets provided by CHS-TDSHS covering the 13-year study period 1999-2011: 1) Texas death certificate data and 2) Texas hospital discharge data. An ancillary source of data was national-level death data from the CDC. We performed a secondary data analysis and report age-adjusted death rates (mortality) and hospital discharge frequencies (morbidity) for CDI, for HIV, and for CDI+HIV coinfection.

Since the turn of the century, CDI has reemerged as an important public health challenge due to the emergence of hypervirulent epidemic strains. From 1999 to 2011 there was a significant upward trend in CDI-related death rates; in the state of Texas alone, the CDI mortality rate increased 8.7-fold over this period, at a rate of 0.2 deaths per year per 100,000 individuals. In contrast, mortality due to HIV decreased by 46% and has been trending down. The demographic groups in Texas with the highest CDI mortality rates were adults aged 65+, males, whites, and hospital inpatients. The epidemiology of C. difficile has changed in such a way that it is no longer confined to these traditional high-risk groups but is increasingly reported in low-risk populations, such as healthy people in the community (community-acquired C. difficile) and, most recently, immunocompromised patients. Among the latter, HIV can worsen the adverse health outcomes of CDI and vice versa. In patients with CDI-HIV coinfection, higher mortality and morbidity were found in young and middle-aged adults, blacks, and males, the same demographic groups at higher risk for HIV. As with typical CDI, coinfection was concentrated among hospital inpatients. Of all CDI-related deaths in the USA from 1999 to 2010 in the 25-44 year age group, 13% involved HIV infection. Of all CDI-related inpatient hospital discharges in Texas from 1999 to 2011 among patients 44 years and younger, 17% involved concomitant HIV infection. Therefore, HIV is a possible novel emerging risk factor for CDI.
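Age-adjusted death rates such as those reported here are conventionally obtained by direct standardization: each age-specific rate is weighted by the standard population's share of that age group. A minimal sketch with illustrative (made-up) stratum counts:

```python
# Direct age standardization. All numbers below are illustrative only,
# not values from the Texas datasets.
strata = [
    # (deaths, population, standard-population weight)
    (12, 5_000_000, 0.60),   # <45 years
    (35, 2_500_000, 0.25),   # 45-64 years
    (110, 1_000_000, 0.15),  # 65+ years
]

adjusted = sum(w * d / p for d, p, w in strata) * 100_000
print(f"Age-adjusted death rate: {adjusted:.1f} per 100,000")
```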

Relevance: 10.00%

Abstract:

This investigation compares two different methodologies for calculating the national cost of epilepsy: the provider-based survey method (PBSM) and the patient-based medical charts and billing method (PBMC&BM). The PBSM uses the National Hospital Discharge Survey (NHDS), the National Hospital Ambulatory Medical Care Survey (NHAMCS) and the National Ambulatory Medical Care Survey (NAMCS) as the sources of utilization. The PBMC&BM uses patient data, charts and billings, to determine utilization rates for specific components of hospital, physician and drug prescriptions.

The 1995 hospital and physician cost of epilepsy is estimated to be $722 million using the PBSM and $1,058 million using the PBMC&BM. The difference of $336 million results from a $136 million difference in utilization and a $200 million difference in unit cost.

Utilization. The utilization difference of $136 million is composed of an inpatient variation of $129 million ($100 million hospital and $29 million physician) and an ambulatory variation of $7 million. The $100 million hospital variance is attributed to the inclusion of febrile seizures in the PBSM (−$79 million) and the exclusion of admissions attributed to epilepsy ($179 million). The former suggests that the diagnostic codes used in the NHDS may not properly match the current definition of epilepsy as used in the PBMC&BM. The latter suggests NHDS errors in the attribution of an admission to the principal diagnosis.

The $29 million variance in inpatient physician utilization is the result of different per-day-of-care physician visit rates: 1.3 for the PBMC&BM versus 1.0 for the PBSM. The absence of visit frequency measures in the NHDS affects the internal validity of the PBSM estimate and requires the investigator to make conservative assumptions.

The remaining ambulatory resource utilization variance is $7 million. Of this amount, $22 million is the result of an underestimate of ancillaries in the NHAMCS and NAMCS extrapolations using the patient visit weight.

Unit cost. The resource cost variation is $200 million: $22 million inpatient and $178 million ambulatory. The inpatient variation of $22 million is composed of $19 million in hospital per-day rates, due to a higher cost per day in the PBMC&BM, and $3 million in physician visit rates, due to a higher cost per visit in the PBMC&BM.

The ambulatory cost variance of $178 million is composed of higher per-physician-visit costs of $97 million and higher per-ancillary costs of $81 million. Both are attributed to the PBMC&BM's precise identification of resource utilization, which permits accurate valuation.

Conclusion. Both methods have specific limitations. The PBSM's strengths are its sample designs, which lead to nationally representative estimates and permit statistical point and confidence-interval estimation for the nation for certain variables under investigation. However, the findings of this investigation suggest that the internal validity of the derived estimates is questionable and that important additional information required to precisely estimate the cost of an illness is absent.

The PBMC&BM is a superior method for identifying the resources utilized in the physician encounter with the patient, permitting more accurate valuation. However, the PBMC&BM does not have the statistical reliability of the PBSM; it relies on synthesized national prevalence estimates to extrapolate a national cost estimate. While precision is important, the ability to generalize to the nation may be limited due to the small number of patients that are followed.
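The dollar decomposition quoted above can be verified directly. A minimal sketch of the arithmetic, using only figures stated in the abstract (in $ millions):

```python
# 1995 hospital and physician cost estimates (in $ millions).
pbsm_total, pbmcbm_total = 722, 1_058

difference = pbmcbm_total - pbsm_total   # 336
utilization_gap, unit_cost_gap = 136, 200
assert difference == utilization_gap + unit_cost_gap

# Utilization gap: inpatient (hospital + physician) plus ambulatory.
inpatient_gap = 100 + 29                 # hospital and inpatient physician
ambulatory_gap = 7
assert utilization_gap == inpatient_gap + ambulatory_gap

# Unit-cost gap: inpatient plus ambulatory components.
assert unit_cost_gap == 22 + 178
assert 178 == 97 + 81                    # per-visit and per-ancillary costs
print("decomposition checks out")
```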

Relevance: 10.00%

Abstract:

These data were collected during a cruise across the Drake Passage in the Southern Ocean in February 2009. The dataset consists of coccolithophore abundance, calcification and primary production rates, carbonate chemistry parameters, and ancillary data on macronutrients, chlorophyll-a, average mixed-layer irradiance, daily irradiance above the sea surface, euphotic and mixed-layer depth, temperature, and salinity.

Relevance: 10.00%

Abstract:

The Wadden Sea is located in the southeastern part of the North Sea, forming an extended intertidal area along the Dutch, German and Danish coast. It is a highly dynamic and largely natural ecosystem influenced by climatic changes and anthropogenic use of the North Sea. Changes in the environment of the Wadden Sea, whether of natural or anthropogenic origin, cannot be monitored by standard measurement methods alone, because large-area surveys of the intertidal flats are often difficult owing to tides, tidal channels and unstable ground. For this reason, remote sensing offers effective monitoring tools. In this study a multi-sensor concept for classification of intertidal areas in the Wadden Sea has been developed. The basis for this method is a combined analysis of RapidEye (RE) and TerraSAR-X (TSX) satellite data coupled with ancillary vector data on the distribution of vegetation, mussel beds and sediments. The classification of vegetation and mussel beds is based on a decision tree and a set of hierarchically structured algorithms that use object and texture features. The sediments are classified by an algorithm that uses thresholds and a majority filter, as sketched below. Further improvements focus on radiometric enhancement and atmospheric correction. First results show that vegetation and mussel beds can be identified using multi-sensor remote sensing. Classifying the sediments in the tidal flats is a challenge compared with vegetation and mussel beds: the results demonstrate that the sediments cannot be classified with high accuracy by their spectral properties alone, owing to their similarity, which is predominantly caused by their water content.
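As a loose illustration of the post-classification step mentioned above (not the study's actual implementation), a 3x3 majority filter over an integer-coded class raster might look like this; the raster values are made up.

```python
import numpy as np
from scipy.ndimage import generic_filter

def majority(window):
    """Most frequent class label within the moving window."""
    return np.bincount(window.astype(int)).argmax()

# Illustrative 5x5 raster of sediment class codes (0=mud, 1=mixed, 2=sand).
classes = np.array([
    [0, 0, 1, 2, 2],
    [0, 1, 1, 2, 2],
    [0, 0, 2, 2, 2],
    [1, 0, 0, 1, 2],
    [0, 0, 0, 2, 2],
])

# A 3x3 majority filter removes isolated, likely misclassified pixels.
smoothed = generic_filter(classes, majority, size=3, mode="nearest")
print(smoothed)
```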

Relevance: 10.00%

Abstract:

Surface elevation maps of the southern half of the Greenland subcontinent are produced from radar altimeter data acquired by the Seasat satellite. A summary of the processing procedure and examples of return waveform data are given. The elevation data are used to generate a regular grid, which is then computer-contoured to provide an elevation contour map. Ancillary maps show the statistical quality of the elevation data and various characteristics of the surface. The elevation map is used to define ice flow directions and delineate the major drainage basins. Regular maps of the Jakobshavns Glacier drainage basin and the ice divide in the vicinity of Crete Station are presented. Altimeter-derived elevations are compared with elevations measured both by satellite geoceivers and by optical surveying.
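The gridding-and-contouring step described here is a standard workflow. A minimal sketch of the idea with synthetic points (not Seasat data), assuming SciPy and Matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

# Synthetic scattered "altimeter" samples: positions in km, elevations in m.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 500, size=(400, 2))
elev = 2000 + 2.0 * xy[:, 0] - 1.5 * xy[:, 1] + rng.normal(0, 20, 400)

# Interpolate the scattered samples onto a regular grid.
xi = yi = np.linspace(0, 500, 100)
XI, YI = np.meshgrid(xi, yi)
ZI = griddata(xy, elev, (XI, YI), method="linear")

# Contour the gridded surface to produce the elevation map.
cs = plt.contour(XI, YI, ZI, levels=15)
plt.clabel(cs, inline=True, fontsize=8)
plt.xlabel("x (km)"); plt.ylabel("y (km)")
plt.show()
```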

Relevance: 10.00%

Abstract:

Microzooplankton (the 20 to 200 µm size class of zooplankton) are recognised as an important part of marine pelagic ecosystems. In terms of biomass and abundance, heterotrophic dinoflagellates are one of the important groups of organisms within the microzooplankton. However, their grazing and growth rates, feeding behaviour and prey preferences are poorly known and understood. A set of data was assembled in order to derive a better understanding of heterotrophic dinoflagellate rates in response to parameters such as prey concentration, prey type (size and species), temperature and the grazer's own size. With these objectives, the literature was searched for laboratory experiments with information on the effect of one or more of these parameters. The criteria for selection and inclusion in the database were: (i) a controlled laboratory experiment with a known dinoflagellate feeding on a known prey; (ii) the presence of ancillary information about the experimental conditions and the organisms used (cell volume, cell dimensions, and carbon content). Rates and ancillary information were reported in whatever units met the experimenters' needs, creating a need to harmonize the units after collection. In addition, different units relate to different mechanisms (carbon to the nutritive quality of the prey, volume to size limits). As a result, grazing rates are available as pg C dinoflagellate⁻¹ h⁻¹, µm³ dinoflagellate⁻¹ h⁻¹ and prey cells dinoflagellate⁻¹ h⁻¹; clearance rate was calculated where not given, and growth rate is expressed per day.
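Converting a grazing rate among the three unit systems is mechanical once the prey's per-cell carbon content and volume are known. A minimal sketch with illustrative prey properties (not values from the compilation):

```python
# Ingestion rate reported as prey cells per dinoflagellate per hour.
ingestion_cells = 3.0      # prey cells dinoflagellate^-1 h^-1 (illustrative)
prey_carbon_pg = 12.0      # pg C per prey cell (illustrative)
prey_volume_um3 = 65.0     # um^3 per prey cell (illustrative)

# The same rate in the carbon- and volume-based units of the database.
ingestion_carbon = ingestion_cells * prey_carbon_pg   # pg C dino^-1 h^-1
ingestion_volume = ingestion_cells * prey_volume_um3  # um^3 dino^-1 h^-1
print(ingestion_carbon, ingestion_volume)
```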

Relevance: 10.00%

Abstract:

Records of the past neodymium (Nd) isotope composition of the deep ocean can resolve ambiguities in the interpretation of other tracers. We present the first Nd isotope data for sedimentary benthic foraminifera. Comparison of the epsilon-Nd of core-top foraminifera from a depth transect on the Cape Basin side of the Walvis Ridge with published seawater data, and with the modern dissolved SiO2-epsilon-Nd trend of the deep Atlantic, suggests that benthic foraminifera represent a reliable archive of the deep-water Nd isotope composition. Neodymium isotope values of benthic foraminifera from ODP Site 1264A (Angola Basin side of the Walvis Ridge) spanning the last 8 Ma agree with Fe-Mn oxide coatings from the same samples and are also broadly consistent with existing fish teeth data for the deep South Atlantic, yielding confidence in the preservation of the marine Nd isotope signal in all these archives. The marine origin of the Nd in the coatings is confirmed by their marine Sr isotope values. These important results allow application of the technique to down-core samples. The new epsilon-Nd datasets, along with ancillary Cd/Ca and Nd/Ca ratios from the same foraminiferal samples, are interpreted in the context of debates on the Neogene history of North Atlantic Deep Water (NADW) export to the South Atlantic. In general, the epsilon-Nd and delta13C records are closely correlated over the past 4.5 Ma. The Nd isotope data suggest strong NADW export from 8 to 5 Ma, consistent with one interpretation of published delta13C gradients. Where the epsilon-Nd record differs from the nutrient-based records, changes in the preformed delta13C or Cd/Ca of southern-derived deep water might account for the difference. Maximum NADW export for the entire record is suggested by all proxies at 3.5-4 Ma. Chemical conditions from 3 to 1 Ma are markedly different, showing, on average, the lowest NADW export of the record. Modern-day values again imply NADW export that is about as strong as at any stage over the past 8 Ma.
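For reference, the epsilon-Nd notation used throughout is the standard parts-per-ten-thousand deviation of a sample's 143Nd/144Nd ratio from the chondritic uniform reservoir (CHUR):

```latex
\varepsilon_{\mathrm{Nd}} =
\left(
  \frac{\left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{sample}}}
       {\left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{CHUR}}}
  - 1
\right) \times 10^{4}
```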

Relevance: 10.00%

Abstract:

Parameters in the photosynthesis-irradiance (P-E) relationship of phytoplankton were measured at weekly to bi-weekly intervals for 20 yr at 6 stations on the Rhode River, Maryland (USA). Variability in the light-saturated photosynthetic rate, PBmax, was partitioned into interannual, seasonal, and spatial components. The seasonal component of the variance was greatest, followed by interannual and then spatial. Physiological models of PBmax based on balanced growth or photoacclimation predicted the overall mean and most of the range, but not individual observations, and failed to capture important features of the seasonal and interannual variability. PBmax correlated most strongly with temperature and the concentration of dissolved inorganic carbon (IC), with lesser correlations with chlorophyll a, diffuse attenuation coefficient, and a principal component of the species composition. In statistical models, temperature and IC correlated best with the seasonal pattern, but temperature peaked in late July, out of phase with PBmax, which peaked in September, coincident with the maximum in monthly averaged IC concentration. In contrast with the seasonal pattern, temperature did not contribute to interannual variation, which instead was governed by IC and the additional lesser correlates. Spatial variation was relatively weak and uncorrelated with ancillary measurements. The results demonstrate that both the overall distribution of PBmax and its relationship with environmental correlates may vary from year to year. Coefficients in empirical statistical models became stable after including 7 to 10 yr of data. The main correlates of PBmax are amenable to automated monitoring, so that future estimates of primary production might be made without labor-intensive incubations.
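The abstract does not state which functional form was fitted to obtain PBmax. As one common choice, the hyperbolic-tangent P-E model of Jassby and Platt (1976) can be fitted to incubation data as sketched below; all data values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def pe_curve(E, pbmax, alpha):
    """Jassby-Platt hyperbolic-tangent photosynthesis-irradiance model."""
    return pbmax * np.tanh(alpha * E / pbmax)

# Illustrative incubation data (not Rhode River measurements):
# irradiance E and chlorophyll-specific photosynthesis PB.
E = np.array([10, 25, 50, 100, 200, 400, 800, 1600], dtype=float)
PB = np.array([0.9, 2.1, 3.8, 6.0, 7.6, 8.1, 8.3, 8.2])

(pbmax, alpha), _ = curve_fit(pe_curve, E, PB, p0=[8.0, 0.1])
print(f"PBmax = {pbmax:.2f}, alpha = {alpha:.3f}")
```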

Relevance: 10.00%

Abstract:

Microzooplankton (the 20 to 200 µm size class of zooplankton) are recognised as an important part of marine pelagic ecosystems. In terms of biomass and abundance, pelagic ciliates are one of the important groups of organisms within the microzooplankton. However, their grazing and growth rates, feeding behaviour and prey preferences are poorly known and understood. A set of data was assembled in order to derive a better understanding of pelagic ciliate rates in response to parameters such as prey concentration, prey type (size and species), temperature and the grazer's own size. With these objectives, the literature was searched for laboratory experiments with information on the effect of one or more of these parameters. The criteria for selection and inclusion in the database were: (i) a controlled laboratory experiment with a known ciliate feeding on a known prey; (ii) the presence of ancillary information about the experimental conditions and the organisms used (cell volume, cell dimensions, and carbon content). Rates and ancillary information were reported in whatever units met the experimenters' needs, creating a need to harmonize the units after collection. In addition, different units relate to different mechanisms (carbon to the nutritive quality of the prey, volume to size limits). As a result, grazing rates are available as pg C ciliate⁻¹ h⁻¹, µm³ ciliate⁻¹ h⁻¹ and prey cells ciliate⁻¹ h⁻¹; clearance rate was calculated where not given, and growth rate is expressed per day.
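The clearance and growth rates mentioned here follow from simple definitions: clearance is ingestion divided by prey concentration, and growth rate per day is the log ratio of final to initial abundance. A minimal sketch with illustrative values:

```python
import math

# Clearance rate from an ingestion rate and the prey concentration.
ingestion = 8.0            # prey cells ciliate^-1 h^-1 (illustrative)
prey_conc = 2000.0         # prey cells mL^-1 (illustrative)
clearance = ingestion / prey_conc          # mL ciliate^-1 h^-1

# Growth rate per day from initial and final ciliate abundances.
n0, nt, days = 100.0, 180.0, 1.0           # cells mL^-1 (illustrative)
mu = math.log(nt / n0) / days              # d^-1
print(f"clearance = {clearance:.4f} mL/h, growth = {mu:.2f} per day")
```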

Relevance: 10.00%

Abstract:

Fiber-reinforced polymers (FRP) are used to strengthen concrete structures above all because of their excellent mechanical properties, their corrosion resistance, and their light weight, which translates into easier and cheaper transport, handling, and application; application is very fast, requires few workers, and uses light ancillary equipment, minimizing interruptions to the use of the structure and inconvenience to users. These advantages have aroused great interest among research groups worldwide, which are currently developing new application techniques and calculation methods. However, the research carried out to date shows a well-defined and accepted procedure for flexural design, which is not the case for shear strengthening; although FRP strengthening has been shown to be an effective system for increasing ultimate shear capacity, the need for further experimental and theoretical studies is also apparent, in order to advance the understanding of the mechanisms involved in this type of strengthening and to establish an appropriate design procedure that maximizes the excellent properties of this material. The models that explain the shear behavior of reinforced concrete members are complex and do not transpose directly into engineering formulas. The standards currently in force generally establish the shear capacity empirically as the sum of the capacities of the concrete and the transverse steel reinforcement. When a member is externally strengthened with FRP, the models are evidently even more complex. The existing guides and recommendations propose calculating the capacity of the member by adding the strength provided by the external FRP strengthening to that already provided by the concrete and the transverse steel. However, the suitability of this approach is questionable, since it does not take into account a possible interaction between reinforcements. This is the origin of the subject of this work, which addresses the shear behavior of reinforced concrete (RC) members externally strengthened with a composite of unidirectional carbon fiber fabric and epoxy resin. First, a complete review is made of the current state of knowledge on the shear strength of reinforced concrete members with and without external FRP strengthening, paying special attention to the acting mechanisms studied to date. The literature consulted was exhaustive and up to date, which allowed the study of the most important proposed models, both for describing the concrete-FRP bond phenomenon and for evaluating the FRP contribution to the total shear capacity, through separate databases of pull-out tests and of reinforced concrete beams tested in shear. On this basis, the mechanisms acting in the FRP shear contribution in reinforced concrete members are set out, together with the way the main design guides available to date address them. Likewise, a strength model is defined for the FRP, and two models are proposed for calculating the effective stresses or strains, one based on the bond model proposed by Oller (2005) and the other on a multivariate regression over the mechanisms described. To complement the study of the work found in the literature, an experimental program was carried out which, besides adding records to the meager existing database, sheds more light on the points considered to be poorly resolved. Within this program, 32 tests were performed on 16 beams 4.5 m long (two tests per beam), shear-strengthened with unidirectional CFRP fabric. Finally, these studies have made it possible to propose modifications to the formulations in the codes and guides currently in force.

Abstract. Its excellent mechanical properties, as well as its corrosion resistance and light weight, which make it easy to apply and inexpensive to ship to the worksite, are the basis of the extended use of fiber-reinforced polymer (FRP) as external strengthening for structures. FRP strengthening is a rapid operation calling for only limited labor and lightweight ancillary equipment, all of which minimizes both the interruption of facility usage and user inconvenience. These advantages have aroused considerable interest in civil engineering science and technology and have led to countless applications the world over. Research studies on the shear strength of FRP-strengthened members have been much fewer in number and more controversial than the research on flexural strengthening, for which a more or less standardized and generally accepted procedure has been established. The research conducted and a host of applications around the world have shown that FRP strengthening is an effective technique for raising ultimate shear strength, but it has also revealed a need for further experimental and theoretical research to advance the understanding of the mechanisms involved and to establish suitable design procedures that optimize the excellent properties of this material. The models that explain reinforced concrete (RC) shear strength behavior are complex and cannot be directly transposed into engineering formulas. The standards presently in place generally establish shear capacity empirically as the sum of the capacities of the concrete and the passive reinforcement. When members are externally strengthened with FRP, the models are obviously even more complex. The existing guides and recommendations propose calculating capacity by adding the external strength provided by the FRP to the contributions of the concrete and passive reinforcement. The suitability of this approach is questionable, however, because it fails to consider the interaction between passive reinforcement and external strengthening. This is the subject of the present work, which focuses on external shear strengthening of reinforced concrete members with unidirectional carbon fiber sheets bonded with epoxy resin. Initially, a thorough literature review on the shear behavior of reinforced concrete beams with and without external FRP strengthening was performed, paying special attention to the acting mechanisms studied to date. This allowed the study of the most important models, both for describing the bond phenomenon and for calculating the FRP shear contribution, through separate databases of pull-out tests and shear tests on reinforced concrete beams externally strengthened with FRP. On this basis, the mechanisms acting in FRP shear strengthening of reinforced concrete beams are set out, together with how the existing guidelines address the topic. Likewise, an FRP stress-strength model is defined and two further models are proposed for calculating the effective stress: one based on the Oller (2005) bond model, and another obtained by best fit to the data, taking into account most of the acting mechanisms. To complement the theoretical part, an experimental program was developed that, in addition to providing more records for the meager existing database, provides greater understanding of the points considered poorly resolved. The test program included 32 tests of 16 beams (two per beam), 4.5 m long, externally shear-strengthened with FRP. Finally, modifications to the existing codes and guidelines are proposed.
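The additive design approach questioned above is usually written as a sum of three shear contributions, in the notation common to FRP strengthening guidelines (a generic statement, not a formula from this thesis):

```latex
V_{Rd} = V_{c} + V_{s} + V_{f}
```

where V_c is the contribution of the concrete, V_s that of the transverse steel reinforcement, and V_f that of the external FRP; the work argues that possible interaction between V_s and V_f makes simple addition questionable.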

Relevance: 10.00%

Abstract:

Competitive abstract machines for Prolog are usually large, intricate, and incorporate sophisticated optimizations. This makes them difficult to code, optimize, and, especially, maintain and extend. This is partly due to the fact that efficiency considerations make it necessary to use low-level languages in their implementation. Writing the abstract machine (and ancillary code) in a higher-level language can help harness this inherent complexity. In this paper we show how the semantics of basic components of an efficient virtual machine for Prolog can be described using (a variant of) Prolog which retains much of its semantics. These descriptions are then compiled to C and assembled to build a complete bytecode emulator. Thanks to the high level of the language used and its closeness to Prolog, the abstract machine descriptions can be manipulated using standard Prolog compilation and optimization techniques with relative ease. We also show how, by applying program transformations selectively, we obtain abstract machine implementations whose performance can match and even exceed that of highly tuned, hand-crafted emulators.
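As a loose illustration of describing instruction semantics at a high level and compiling the descriptions to C (sketched here in Python, not the authors' Prolog-based system), a toy generator might emit the dispatch cases of a bytecode emulator from declarative descriptions:

```python
# Toy instruction set: each entry maps an instruction name to the C
# statements implementing its semantics. Loosely mirrors describing
# abstract-machine instructions declaratively and emitting C from them.
INSTRUCTIONS = {
    "PUSH_CONST": ["*sp++ = code[pc++];"],
    "ADD":        ["sp[-2] = sp[-2] + sp[-1];", "sp--;"],
    "HALT":       ["return sp[-1];"],
}

def emit_emulator():
    lines = ["int run(int *code) {",
             "    int stack[256], *sp = stack, pc = 0;",
             "    for (;;) switch (code[pc++]) {"]
    for opcode, (name, body) in enumerate(INSTRUCTIONS.items()):
        lines.append(f"    case {opcode}: /* {name} */")
        lines += [f"        {stmt}" for stmt in body]
        lines.append("        break;")
    lines += ["    }", "}"]
    return "\n".join(lines)

print(emit_emulator())
```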

Relevance: 10.00%

Abstract:

We describe the current status of, and provide preliminary performance results for, a compiler of Prolog to C. The compiler is novel in that it is designed to accept different kinds of high-level information (typically obtained via an analysis of the initial Prolog program and expressed in a standardized language of assertions) and use this information to optimize the resulting C code, which is then further processed by an off-the-shelf C compiler. The basic translation process essentially mimics an unfolding of a C-coded bytecode emulator with respect to the particular bytecode corresponding to the Prolog program. Optimizations are then applied to this unfolded program. This is facilitated by a more flexible design of the bytecode instructions and their lower-level components. This approach allows reusing a sizable amount of the machinery of the bytecode emulator: ancillary pieces of C code, data definitions, memory management routines and areas, etc., as well as mixing bytecode-emulated code with natively compiled code in a relatively straightforward way. We report on the performance of programs compiled by the current version of the system, both with and without analysis information.
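The unfolding idea at the heart of the translation, specializing a bytecode emulator with respect to one fixed bytecode program so that run-time dispatch disappears, can be sketched in miniature (in Python rather than the paper's Prolog/C setting):

```python
# A fixed bytecode program: computes (a + b) * a.
PROGRAM = [("push_arg", 0), ("push_arg", 1), ("add",),
           ("push_arg", 0), ("mul",)]

def interpret(program, args):
    """Generic emulator: dispatches on every instruction at run time."""
    stack = []
    for instr in program:
        if instr[0] == "push_arg":
            stack.append(args[instr[1]])
        elif instr[0] == "add":
            stack.append(stack.pop() + stack.pop())
        elif instr[0] == "mul":
            stack.append(stack.pop() * stack.pop())
    return stack[-1]

def unfold(program):
    """Specialize the emulator for one program: emit straight-line code."""
    stack, lines, tmp = [], [], 0
    for instr in program:
        if instr[0] == "push_arg":
            stack.append(f"args[{instr[1]}]")
        else:
            op = "+" if instr[0] == "add" else "*"
            b, a = stack.pop(), stack.pop()
            lines.append(f"t{tmp} = {a} {op} {b}")
            stack.append(f"t{tmp}")
            tmp += 1
    lines.append(f"return {stack[-1]}")
    return "\n".join(lines)

print(interpret(PROGRAM, [3, 4]))  # 21
print(unfold(PROGRAM))             # dispatch-free straight-line version
```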