919 results for cost-per-wear model


Relevance: 100.00%

Abstract:

The Houston region is home to arguably the largest petrochemical and refining complex anywhere. The effluent of this complex includes many potentially hazardous compounds. Study of some of these compounds has led to recognition that a number of known and probable carcinogens are at elevated levels in ambient air. Two of these, benzene and 1,3-butadiene, have been found in concentrations that may pose a health risk for residents of Houston. Recent popular journalism and publications by local research institutions have increased public interest in Houston's air quality. Much of this literature has been critical of local regulatory agencies' oversight of industrial pollution, and a number of residents have begun to volunteer with air quality advocacy groups to test community air. Inexpensive methods exist for monitoring ambient concentrations of ozone, particulate matter and airborne toxics. This study evaluates one technique that has been successfully applied to airborne toxics: solid phase microextraction (SPME), which has been used to measure airborne volatile organic hydrocarbons at community-level concentrations. It has yielded accurate and rapid concentration estimates at a relatively low cost per sample. Examples of its application to airborne benzene exist in the literature; none were found for airborne 1,3-butadiene. These compounds were selected, in an evaluation of SPME as a community-deployed technique, to replicate the previous application to benzene, to extend the technique to 1,3-butadiene, and because of their salience in this community. This study demonstrates that SPME is a useful technique for quantification of 1,3-butadiene at concentrations observed in Houston; laboratory background levels precluded recommending the technique for benzene. One type of SPME fiber, 85 μm Carboxen/PDMS, was found to be a sensitive sampling device for 1,3-butadiene under temperature and humidity conditions common in Houston. The study indicates that these variables affect instrument response, which suggests that calibration must be performed under the specific conditions of deployment. While deployment of this technique was less expensive than other methods of quantifying 1,3-butadiene, the complexity of calibration may exclude an SPME method from broad deployment by community groups.
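
The abstract's point that temperature and humidity shift instrument response, so calibration must match field conditions, can be illustrated with a small calibration model. A minimal sketch in Python; the data, the linear model form, and the function names are hypothetical, not the study's actual calibration:

```python
# Hypothetical SPME calibration: GC response as a function of analyte
# concentration, temperature, and relative humidity (all data invented).
import numpy as np

# Calibration runs: concentration (ppbv), temperature (deg C), RH (%), peak area
conc = np.tile([0.5, 1.0, 2.0, 4.0], 3)
temp = np.repeat([20.0, 35.0, 35.0], 4)
rh   = np.repeat([40.0, 40.0, 80.0], 4)
area = np.array([120, 235, 470, 950,
                 105, 205, 415, 830,
                  90, 180, 360, 720], dtype=float)

# Fit area = b0 + b1*conc + b2*temp + b3*rh by least squares
X = np.column_stack([np.ones_like(conc), conc, temp, rh])
b, *_ = np.linalg.lstsq(X, area, rcond=None)

def estimate_conc(area_obs, t_obs, rh_obs):
    # Invert the fitted model using conditions measured at sampling time
    return (area_obs - b[0] - b[2] * t_obs - b[3] * rh_obs) / b[1]

print(estimate_conc(300.0, 28.0, 60.0))  # ppbv under those field conditions
```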

Relevance: 100.00%

Abstract:

Maine implemented a hospital rate-setting program in 1984, at approximately the same time as Medicare introduced the Prospective Payment System (PPS). This study examines the effectiveness of the program in controlling costs over the period 1984-1989. Hospital costs in Maine are compared with costs in 36 non-rate-setting states and 11 other rate-setting states. Changes in cost per equivalent admission, cost per adjusted patient day, cost per capita, admissions, and length of stay are described and analyzed using multivariate techniques, controlling for a number of supply and demand variables expected to influence costs independently of rate-setting. Results indicate the program was effective in containing costs measured as cost per adjusted patient day; however, this was not true for the other two cost variables. The average length of stay in Maine hospitals increased during the period, indicating an association with rate-setting. Several supply variables, especially the number of beds per 1,000 population, were strongly associated with hospital cost and use.

Relevance: 100.00%

Abstract:

This investigation compares two methodologies for calculating the national cost of epilepsy: the provider-based survey method (PBSM) and the patient-based medical charts and billing method (PBMC&BM). The PBSM uses the National Hospital Discharge Survey (NHDS), the National Hospital Ambulatory Medical Care Survey (NHAMCS) and the National Ambulatory Medical Care Survey (NAMCS) as its sources of utilization. The PBMC&BM uses patient data, charts and billings, to determine utilization rates for specific components of hospital, physician and drug prescriptions. The 1995 hospital and physician cost of epilepsy is estimated at $722 million using the PBSM and $1,058 million using the PBMC&BM. The difference of $336 million results from a $136 million difference in utilization and a $200 million difference in unit cost. Utilization. The utilization difference of $136 million is composed of an inpatient variation of $129 million ($100 million hospital and $29 million physician) and an ambulatory variation of $7 million. The $100 million hospital variance is attributed to the inclusion of febrile seizures in the PBSM ($−79 million) and the exclusion of admissions attributable to epilepsy ($179 million). The former suggests that the diagnostic codes used in the NHDS may not properly match the current definition of epilepsy as used in the PBMC&BM; the latter suggests NHDS errors in attributing an admission to the principal diagnosis. The $29 million variance in inpatient physician utilization results from different per-day-of-care physician visit rates: 1.3 for the PBMC&BM versus 1.0 for the PBSM. The absence of visit-frequency measures in the NHDS affects the internal validity of the PBSM estimate and forces the investigator to make conservative assumptions. The remaining ambulatory resource utilization variance is $7 million; of this amount, $22 million results from an underestimate of ancillaries in the NHAMCS and NAMCS extrapolations using the patient visit weight. Unit cost. The resource cost variation is $200 million: $22 million inpatient and $178 million ambulatory. The inpatient variation of $22 million is composed of $19 million in hospital per-day rates, due to a higher cost per day in the PBMC&BM, and $3 million in physician visit rates, due to a higher cost per visit in the PBMC&BM. The ambulatory cost variance of $178 million is composed of higher per-physician-visit costs of $97 million and higher per-ancillary costs of $81 million, both attributed to the PBMC&BM's precise identification of resource utilization, which permits accurate valuation. Conclusion. Both methods have specific limitations. The PBSM's strengths are its sample designs, which lead to nationally representative estimates and permit statistical point and confidence-interval estimation for certain variables under investigation. However, the findings of this investigation suggest that the internal validity of the derived estimates is questionable and that important additional information required to precisely estimate the cost of an illness is absent. The PBMC&BM is superior in identifying the resources utilized in the physician's encounter with the patient, permitting more accurate valuation. However, the PBMC&BM does not have the statistical reliability of the PBSM; it relies on synthesized national prevalence estimates to extrapolate a national cost estimate. While precision is important, the ability to generalize to the nation may be limited by the small number of patients followed.
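
The dollar decomposition reported above can be reconciled arithmetically. A short Python check using only the figures quoted in the abstract:

```python
# Reconcile the reported $336M gap between the two costing methods
# (all figures in millions of US$, taken from the abstract).
pbsm_total, pbmcbm_total = 722, 1058
gap = pbmcbm_total - pbsm_total                 # 336

# Utilization component: inpatient (hospital + physician) plus ambulatory
utilization = (100 + 29) + 7                    # 136
# The hospital piece decomposes into febrile-seizure inclusion and
# misattributed admissions: -79 + 179 = 100
assert -79 + 179 == 100

# Unit-cost component: inpatient (per-day + per-visit) plus ambulatory
unit_cost = (19 + 3) + (97 + 81)                # 200

assert utilization + unit_cost == gap
print(gap, utilization, unit_cost)              # 336 136 200
```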

Relevance: 100.00%

Abstract:

Ocean Drilling Program Leg 129 recovered chert, porcellanite, and radiolarite from Middle Jurassic to lower Miocene strata of the western Pacific that formed by different processes and within distinct host rocks. These cherts and porcellanites formed by (1) replacement of chalk or limestone, (2) silicification and in-situ silica phase transformation of bedded clay-bearing biosiliceous deposits, (3) high-temperature silicification adjacent to volcanic flows or sills, and (4) silica phase transformation of mixed biosiliceous-volcaniclastic sediments. Petrologic and O-isotopic studies highlight the key importance of permeability and time in controlling the formation of dense cherts and porcellanites. The formation of dense, vitreous cherts apparently requires the local addition and concentration of silica. The influence of permeability is shown by two examples: (1) fragments of originally identical radiolarite that were differentially isolated from pore-water circulation by cement-filled fractures were silicified to different degrees, and (2) secondary porosity developed during the opal-CT to quartz inversion under conditions of negligible permeability. The importance of time is shown by the presence of quartz chert below, but not above, a Paleogene hiatus at Site 802, indicating that between 30 and 52 m.y. was required for the formation of quartz chert within calcareous-siliceous sediments. The oxygen-isotopic compositions of all Leg 129 carbonate- and Fe/Mn-oxide-free whole-rock samples of chert and porcellanite range widely, from δ18O = 27.8‰ to 39.8‰ vs. V-SMOW. Opal-CT samples are consistently richer in 18O (34.1‰ to 39.3‰) than quartz subsamples (27.8‰ to 35.7‰). Using the O-isotopic fractionation expression for quartz-water of Knauth and Epstein (1976) and assuming δ18O of pore water = −1.0‰, model temperatures of formation are 7°-26°C for carbonate-replacement quartz cherts, 22°-25°C for bedded quartz cherts, and 32°-34°C for thermal quartz cherts. Large variations in O-isotopic composition exist at the same burial depth, between coexisting silica phases in the same sample and within the same phase in adjacent lithologies. For example, quartz has a wide range of isotopic compositions within a single breccia sample: δ18O = 33.4‰ and 28.0‰ for early and late stages of fracture-filling cementation, and 31.6‰ and 30.2‰ for microcrystalline quartz precipitated within enclosed chert and radiolarite fragments. Similarly, opal-CT d(101) spacing varies across lithologic or diagenetic boundaries within single samples. Co-occurring opal-CT and chalcedonic quartz in shallowly buried chert and porcellanite from Sites 800 and 801 differ in δ18O by 8.7‰, suggesting that pore waters in the Pigafetta Basin underwent a Tertiary shift to strongly 18O-depleted values, due to alteration of the underlying Aptian to Albian-Cenomanian volcaniclastic deposits, after opal-CT precipitation but before precipitation of the microfossil-filling chalcedony.
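
The model temperatures follow from a quartz-water fractionation curve. A minimal sketch, assuming the commonly cited Knauth and Epstein (1976) calibration 1000 ln α = 3.09×10⁶/T² − 3.29 (T in kelvin) and the stated pore-water value of −1.0‰:

```python
# Model formation temperature for quartz cherts from delta-18O values,
# assuming the Knauth & Epstein (1976) quartz-water calibration:
#   1000 ln(alpha) = 3.09e6 / T**2 - 3.29   (T in kelvin)
import math

D18O_WATER = -1.0  # per mil, assumed pore-water composition

def formation_temp_c(d18o_quartz):
    # 1000 ln(alpha) computed exactly from the two delta values
    frac = 1000.0 * math.log((1000.0 + d18o_quartz) / (1000.0 + D18O_WATER))
    t_kelvin = math.sqrt(3.09e6 / (frac + 3.29))
    return t_kelvin - 273.15

for d in (27.8, 31.6, 35.7):  # quartz values quoted in the abstract
    print(f"d18O = {d:.1f} per mil -> {formation_temp_c(d):.0f} deg C")
```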

Relevance: 100.00%

Abstract:

Interest in combined-cycle gas and steam power plants has grown markedly owing to their high efficiency, low generating cost and short construction lead time. The main objective of this thesis is to deepen the understanding of this technology, which is still insufficiently known because of the large number of degrees of freedom in the design of such plants. The study was carried out in several phases. The first analysed the different technologies that can be used in this type of plant, some of them very recent or still under research, such as variable-geometry gas turbines, gas turbines cooled with water or with steam from the steam cycle, and once-through heat recovery steam generators working with water at supercritical conditions. Mathematical models were then developed to simulate each component of the plant thermodynamically, both at the design point and at part load. At the same time, a novel methodology was developed for solving the system of equations that results from simulating any possible combined-cycle configuration, so that the behaviour of any plant can be determined at any operating point. Finally, a cost-attribution model was developed for this type of plant. With this model, studies can be carried out not only from a thermodynamic point of view but also from a thermoeconomic one: trade-offs between efficiency and cost can be found, production costs assigned, supply curves and plant profits determined, and the power range in which the plant is profitable delimited. The software developed in parallel with the simulation models has been used intensively to obtain results, and the study of these results deepens the knowledge of the technology and supports a design methodology for this kind of plant under a thermoeconomic criterion.
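
As an illustration of the kind of trade-off a thermoeconomic model evaluates, the sketch below computes a generating cost per MWh across part loads; every parameter and the efficiency curve are hypothetical, not taken from the thesis:

```python
# Toy generating-cost calculation of the kind a thermoeconomic model
# performs (all parameters hypothetical).
FUEL_PRICE = 25.0   # euro per MWh of fuel (LHV)
FIXED_COST = 60e6   # euro per year (annualized capital + O&M)
CAPACITY = 400.0    # MWe at full load

def efficiency(load_fraction):
    # Hypothetical part-load efficiency curve of a combined cycle
    return 0.58 - 0.10 * (1.0 - load_fraction) ** 2

def cost_per_mwh(load_fraction, hours=8760):
    power = CAPACITY * load_fraction          # MWe
    energy = power * hours                    # MWh per year
    fuel = energy / efficiency(load_fraction) * FUEL_PRICE
    return (fuel + FIXED_COST) / energy

for lf in (1.0, 0.75, 0.5):
    print(f"load {lf:.0%}: {cost_per_mwh(lf):.1f} euro/MWh")
```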

Relevance: 100.00%

Abstract:

Transportation modes produce many external costs, such as congestion, accidents, and environmental impacts (pollution, noise and so on). From microeconomic theory it is well known that, in order to maximize social welfare, transportation modes should internalize the marginal costs they produce: allocative efficiency is achieved when all transportation modes are priced at their social marginal cost. The objective of this research is to evaluate to what extent different passenger transport modes internalize their social marginal costs. This analysis matters because it affects the competitiveness of the different transport modes for a given origin-destination (OD) pair. The case study is the Madrid-Barcelona corridor in Spain, considering car, bus, high-speed train and air. The research calculates the marginal social cost per user for each transportation mode and compares it with the average fare currently paid by users, allowing for the effect of discriminatory taxes. The external costs are calculated according to the guidelines established by the European Union. The gap between the marginal social cost and the price paid by users gives the extra cost per passenger that each transport mode would have to pay to internalize the external costs it produces. The research shows that the external costs produced by road and air transport are much higher than those produced by rail. However, the results show that road transport already internalizes all the external costs it produces, because users pay high fuel taxes. In other words, although rail transportation produces lower external costs, road transportation pays more than it should on the basis of social marginal costs. The results of this work might inform future European policy actions.
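
The internalization gap described above is the difference between marginal social cost and the price actually paid. A small sketch with invented per-passenger figures (not the study's data):

```python
# Internalization gap per passenger: marginal social cost minus the
# price actually paid (illustrative figures only, not the study's data).
modes = {
    # mode: (marginal social cost euro/passenger, price paid euro/passenger)
    "car": (55.0, 70.0),
    "bus": (25.0, 30.0),
    "hst": (40.0, 45.0),
    "air": (65.0, 60.0),
}

for mode, (msc, paid) in modes.items():
    gap = msc - paid
    status = "under-internalized" if gap > 0 else "over-internalized"
    print(f"{mode}: gap {gap:+.1f} euro/passenger ({status})")
```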

Relevance: 100.00%

Abstract:

This project covers the adaptation of a coal-fired power station to the requirements of Directive 2010/75/EU of the European Parliament and of the Council of 24 November 2010 on industrial emissions. The station has a water-cooled subcritical coal unit rated at 350 MWe at full load and 190 MWe at technical minimum, generating 1,090 t/h of steam at 540 °C and 168 kg/cm² at full load. Current NOx emissions are 650 mg/m³ (normal conditions, dry, 6% O₂); the objective is to reduce them to at most 200 mg/m³ under the same conditions. The project analyses in detail the current operating conditions of the plant (fuel used, hours of operation, climatic conditions and production) and reviews all the NOx-reduction techniques available on the market, distinguishing between primary measures, which act on NOx formation, and secondary measures, which clean the flue gas. Since primary measures are already implemented at the plant, the project proposes reduction by secondary measures, and among those analysed a selective catalytic reduction (SCR) reactor was selected. After reviewing the available reactors and catalysts, a high-dust configuration was chosen, with the catalyst arranged in 3 layers plus 1, based on metal oxides (TiO2, V2O5, WO3) with a laminar structure. The reactor is designed to operate at a temperature below 450 °C, and NH3 diluted to 24.5% was selected as the reducing agent. The project also includes the design of the complete ammonia storage, evaporation, dilution and injection system. The design guarantees an NOx concentration in the stack gas below 180 mg/m³(n). Reducing NOx to the established limits has a cost per net MWh generated, with the plant operating 60% of the time at full load and 40% at technical minimum and a 10-year amortization, of 4.10 €/MWh.
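
The quoted 4.10 €/MWh figure, combined with the stated load pattern, implies an annual cost that simple arithmetic can recover. A sketch; the 8,760 operating hours per year are an assumption for illustration, not project data:

```python
# Back out the annual cost implied by the quoted 4.10 euro/MWh figure,
# using the load pattern stated in the abstract (60% full load at
# 350 MWe, 40% technical minimum at 190 MWe).
HOURS_PER_YEAR = 8760   # assumed continuous operation, for illustration
COST_PER_MWH = 4.10     # euro

avg_power = 0.6 * 350 + 0.4 * 190            # MWe -> 286 MWe
annual_energy = avg_power * HOURS_PER_YEAR   # MWh per year
annual_cost = annual_energy * COST_PER_MWH   # euro/year over the 10-year term

print(f"{annual_energy:,.0f} MWh/year -> {annual_cost / 1e6:.1f} M euro/year")
```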

Relevance: 100.00%

Abstract:

This study offers a tool for approaching the morphometric space in which the high-density city is formulated from collective housing. Collective housing is the basic cell of the city. The configurational and dimensional study of the urban fabric reveals the importance of building depth as a key parameter halfway between the dwelling and the city. Building depth traces the limit of architecture in the city; from it the urban territory is equipped and quantified. Its dynamics characterize the different environments, while within it the type is formulated through a continuous process of verification and adaptation. The form of the city and its configurational possibilities, in terms of built mass and public space and the relationship between them, depend to a great extent on building depth, which is therefore an important parameter relating the configurations of exterior and interior space. In design, once a depth is established, some properties adapt easily while others require a degree of interpretation or must be discarded. For a given floor area, fixing the depth forces the dimension of the frontage in the possible configurations; both dimensions are decisive for the form factor of the built mass, and their relationship produces a complex range of possibilities. Seen from the city, a great depth encloses and mixes all kinds of uses indiscriminately, lowers the cost per unit of built area and shares its frontage, reducing thermal and light exchange; the city of reduced depth, by contrast, adjusts form to use and develops linearly and repetitively along its exterior frontages, where strong energy exchange is offset by the great possibilities of free space. Seen from the dwelling, the different depths arise under specific determinants (climate, compactness, occupancy, hybridization, dwelling size, etc.), while the type develops on a related metric. Starting from this dialectic, the work studies the dependence between the conditions of the housing building and its metrics, and organizes buildings hierarchically by the parameter "depth" to create a tool that, like an abacus, makes visible the relational dynamics between configuration and metrics under high-density conditions. In a first phase, a large sample of representative collective-housing buildings, mainly European, was compiled from three prestigious repertoire books; the buildings were ordered and categorized, extracting commensurable data and the key themes linking depth of footprint to morphology. This information was then studied in diagrams that bring out convergences and divergences, accumulations and voids, limits, characteristic intervals, margins and axes, parameters and attributes, whose relationships seek to factor the morphological and metric place of the house as metadwelling and city. The tool is thus established as a complex relational framework in which concrete cases can be positioned and cross-cutting links traced, whether morphological, cultural, climatic, technical, normative or technological. Each new case or trace added produces consonances and dissonances in the framework that require interpretation and verification, so this instrument of comparative analysis is tempered, specialized, completed and refined through use. The form of residence in the dense city is thereby shown on a unitary morphological subsystem, and its understanding becomes more easily attainable and cumulative for further research, learning and professional practice.
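
The dimensional constraint noted above, that a given floor area plus a fixed depth force the frontage, is simple arithmetic. A sketch with hypothetical values:

```python
# For a given floor area per dwelling, fixing the building depth forces
# the frontage dimension; deeper blocks expose less envelope per built
# square metre. Illustrative values only.
def frontage(floor_area_m2, depth_m):
    return floor_area_m2 / depth_m

AREA = 90.0  # m2 per dwelling, hypothetical
for depth in (8.0, 12.0, 16.0, 24.0):
    f = frontage(AREA, depth)
    print(f"depth {depth:4.1f} m -> frontage {f:5.2f} m, "
          f"frontage/area ratio {f / AREA:.3f} per m")
```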

Relevance: 100.00%

Abstract:

INTRODUCTION: Endoscopic biliary stenting is accepted worldwide as the first-choice palliative treatment for malignant biliary obstruction. Two types of material are still used to manufacture stents: plastic and metal. Consequently, many questions remain as to which is more beneficial to the patient. This review gathers the highest-quality available information on the two stent types, reporting dysfunction, complication and reintervention rates, costs, survival and patency time, and aims to inform current clinical practice. OBJECTIVE: To analyse, by meta-analysis, the benefits of the two stent types in inoperable malignant biliary obstruction. METHODS: A systematic review of randomized clinical trials (RCTs) was conducted, last updated in March 2015, using EMBASE, CINAHL (EBSCO), Medline, Lilacs/Central (BVS), Scopus, CAPES (Brazil) and grey literature. Information from the selected studies was extracted for six outcomes: primarily dysfunction, reintervention and complication rates; secondarily costs, survival and patency time. Data on the characteristics of RCT participants, inclusion and exclusion criteria and stent types were also extracted. Bias was assessed mainly with the Jadad scale. This meta-analysis was registered in the PROSPERO database under number CRD42014015078. Absolute risk was analysed using RevMan 5, calculating risk differences (RD) for dichotomous variables and mean differences (MD) for continuous variables. RD and MD for each primary outcome were pooled using the Mantel-Haenszel test, and inconsistency was assessed with the chi-squared test (Chi2) and the Higgins method (I2). Sensitivity analysis was performed by removing outlier studies and applying a random-effects model. Student's t test was used to compare weighted arithmetic means for the secondary outcomes. RESULTS: 3,660 studies were initially identified; 3,539 were excluded by title or abstract, while 121 were assessed in full and excluded, mainly for not comparing self-expandable metal stents (SEMS) with plastic stents (PS), leaving thirteen RCTs and 1,133 subjects in the meta-analysis. Mean age was 69.5 years, and the most common cancers were bile-duct (proximal) and pancreatic (distal). The most frequently used SEMS diameter was 10 mm (30 Fr) and the most frequently used PS diameter was 10 Fr. In the meta-analysis, SEMS showed less overall dysfunction than PS (21.6% versus 46.8%, p < 0.00001) and fewer reinterventions (21.6% versus 56.6%, p < 0.00001), with no difference in complications (13.7% versus 15.9%, p = 0.16). In the secondary analysis, mean survival was longer in the SEMS group (182 versus 150 days, p < 0.0001), with a longer patency period (250 versus 124 days, p < 0.0001) and a similar cost per patient, although lower in the SEMS group (4,193.98 versus 4,728.65 euros, p < 0.0985). CONCLUSION: SEMS are associated with less dysfunction, lower reintervention rates, better survival and longer patency time. Complications and costs showed no difference.
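
The risk difference (RD) used for the dichotomous outcomes can be computed per trial before pooling. A minimal sketch; the event counts are invented, chosen only to reproduce the reported overall rates:

```python
# Risk difference (RD) with a 95% CI for one dichotomous outcome, the
# basic quantity pooled in the meta-analysis. Counts are illustrative,
# chosen to match the reported overall rates (21.6% vs 46.8%).
import math

def risk_difference(events_a, n_a, events_b, n_b):
    p_a, p_b = events_a / n_a, events_b / n_b
    rd = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return rd, (rd - 1.96 * se, rd + 1.96 * se)

# SEMS dysfunction vs PS dysfunction (hypothetical single-trial counts)
rd, ci = risk_difference(122, 566, 265, 567)
print(f"RD = {rd:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```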

Relevance: 100.00%

Abstract:

Final dissertation of the Integrated Master's Degree in Medicine, Faculdade de Medicina, Universidade de Lisboa, 2014

Relevance: 100.00%

Abstract:

Forest fires are a reality that can be mitigated with the help of the Armed Forces, through personnel and equipment. The FIREND® project, of which this work is part, aims to design a munition that, by releasing a chemical agent over a fire, stops it from spreading and, if possible, extinguishes it. The projectile has a calibre of 155 mm and its payload compartment will hold about 7.5

Relevance: 100.00%

Abstract:

In 1999, the Department of Health in Western Australia began a telehealth project, which finished in 2004. The 75 videoconferencing sites funded by the project were part of a total state-wide videoconference network of 104 sites. From January 2002 to December 2003, a total of 3266 consultations, case reviews and patient education sessions took place. Clinical use grew to 30% of all telehealth activity, educational use was approximately 40% (1416 sessions) and management use was about 30% (1031 sessions). The average overhead cost per telehealth session across all regions and usage types was $A192. Meaningful comparison of these results with those of other public health providers was difficult, because many of the available websites on telehealth were out of date. Despite the successful use of telehealth to deliver clinical services in Western Australia, sustaining the effort in the post-project phase will present significant challenges.

Relevance: 100.00%

Abstract:

Background: The aim of this study was to determine the effects of carvedilol on the costs related to the treatment of severe chronic heart failure (CHF). Methods: Costs for the treatment of heart failure within the National Health Service (NHS) in the United Kingdom (UK) were applied to resource-utilisation data prospectively collected for all patients randomized into the Carvedilol Prospective Randomized Cumulative Survival (COPERNICUS) Study. Unit-specific per diem (hospital bed day) costs were used to calculate expenditure due to hospitalizations. We also included the costs of carvedilol treatment, general practitioner surgery/office visits, hospital outpatient clinic visits and nursing home care, based on estimates derived from validated patterns of clinical practice in the UK. Results: The estimated cost of carvedilol therapy and related ambulatory care for the 1156 patients assigned to active treatment was £530,771 (£44.89 per patient-month of follow-up). However, patients assigned to carvedilol were hospitalised less often and accumulated fewer and less expensive days of admission. Consequently, the total estimated cost of hospital care was £3.49 million in the carvedilol group compared with £4.24 million for the 1133 patients in the placebo arm. The cost of post-discharge care was also lower in the carvedilol group than in the placebo group (£479,200 vs. £548,300). Overall, the cost per patient treated in the carvedilol group was £3,948 compared with £4,279 in the placebo group. This equated to £385.98 vs. £434.18, respectively, per patient-month of follow-up: an 11.1% reduction in health care costs in favour of carvedilol. Conclusions: These findings suggest that carvedilol treatment can not only increase survival and reduce hospital admissions in patients with severe CHF but also cut costs in the process.
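
The per-patient and per-patient-month figures above can be cross-checked for internal consistency. A short Python check using the abstract's numbers:

```python
# Reconcile the reported per-patient-month costs with the per-patient
# totals (figures from the abstract; pounds sterling).
carvedilol_total_pp = 3948.0   # cost per patient, carvedilol arm
placebo_total_pp    = 4279.0   # cost per patient, placebo arm
carvedilol_ppm      = 385.98   # cost per patient-month
placebo_ppm         = 434.18

# Implied mean follow-up in months (should roughly agree between arms)
print(carvedilol_total_pp / carvedilol_ppm)   # ~10.2 months
print(placebo_total_pp / placebo_ppm)         # ~9.9 months

# Relative cost reduction in favour of carvedilol
saving = (placebo_ppm - carvedilol_ppm) / placebo_ppm
print(f"{saving:.1%}")                        # ~11.1%
```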

Relevance: 100.00%

Abstract:

Click-through rate is considered a key performance indicator of the success of online advertising and is the most frequently used measure of the effectiveness of banner advertising. Marketers also use click-through rates in performance measurement activities such as calculating 'customer lifetime value' and 'customer acquisition cost'. Click-through is the second most frequently used banner-ad pricing method, after cost per thousand impressions. Online advertising is facing a new form of challenge: the artificial inflation of click-through rates. We call this practice 'cyber-rigging'. The objective of this paper is to explore the ethical dimensions of cyber-rigging through the application of ethical principles and theories.
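
The metrics the abstract names (click-through rate, cost per click, customer acquisition cost) are straightforward ratios. A sketch with invented campaign numbers:

```python
# Basic campaign metrics referred to in the abstract (illustrative numbers).
impressions = 500_000
clicks      = 1_750
conversions = 35
spend       = 2_100.0   # dollars, paid per click

ctr = clicks / impressions                   # click-through rate
cpc = spend / clicks                         # effective cost per click
cac = spend / conversions                    # customer acquisition cost
ecpm = spend / impressions * 1000            # what the same spend would
                                             # mean under CPM pricing
print(f"CTR {ctr:.2%}, CPC ${cpc:.2f}, CAC ${cac:.0f}, eCPM ${ecpm:.2f}")
```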

Relevance: 100.00%

Abstract:

Knowledge maintenance is a major challenge for both knowledge management and the Semantic Web. Over the Semantic Web there will operate a network of collaborating agents, each with its own ontologies or knowledge bases. A change in the knowledge state of one agent may need to be propagated across a number of agents and their associated ontologies. The challenge is deciding how to propagate a change of knowledge state: the effects of a change cannot be known in advance, so an agent cannot know whom to inform unless it adopts a simple 'tell everyone, everything' strategy. This situation is highly reminiscent of the classic Frame Problem in AI. We argue that for agent-based technologies to succeed, far greater attention must be given to creating an appropriate model for knowledge update. In a closed system, simple strategies are possible (e.g. 'sleeping dog', 'cheap test', or even complete checking). However, in an open system where cause and effect are unpredictable, a coherent cost-benefit model of agent interaction is essential; otherwise, the effectiveness of every act of knowledge update/maintenance is brought into question.
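
A cost-benefit model of update propagation of the kind argued for here could take the form of a per-peer filter. A minimal sketch; the agent roles, weights, and decision rule are hypothetical, not the authors' model:

```python
# A minimal sketch of a cost-benefit update-propagation rule: notify a
# peer agent only when the expected benefit of informing it outweighs
# the communication/revision cost. All agents and weights are invented.
from dataclasses import dataclass

@dataclass
class Peer:
    name: str
    relevance: float    # estimated probability the change affects the peer
    benefit: float      # value of the peer being up to date
    update_cost: float  # cost of sending plus peer-side belief revision

def peers_to_notify(peers):
    # 'Tell everyone, everything' corresponds to skipping this filter.
    return [p.name for p in peers if p.relevance * p.benefit > p.update_cost]

peers = [
    Peer("scheduler", relevance=0.9, benefit=10.0, update_cost=2.0),
    Peer("archiver",  relevance=0.1, benefit=3.0,  update_cost=1.5),
    Peer("planner",   relevance=0.5, benefit=4.0,  update_cost=1.0),
]
print(peers_to_notify(peers))   # ['scheduler', 'planner']
```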