Abstract:
In this work we are concerned with the analysis and numerical solution of Black-Scholes type equations arising in the modeling of incomplete financial markets and an inverse problem of determining the local volatility function in a generalized Black-Scholes model from observed option prices. In the first chapter a fully nonlinear Black-Scholes equation which models transaction costs arising in option pricing is discretized by a new high order compact scheme. The compact scheme is proved to be unconditionally stable and non-oscillatory and is very efficient compared to classical schemes. Moreover, it is shown that the finite difference solution converges locally uniformly to the unique viscosity solution of the continuous equation. In the next chapter we turn to the calibration problem of computing local volatility functions from market data in a generalized Black-Scholes setting. We follow an optimal control approach in a Lagrangian framework. We show the existence of a global solution and study first- and second-order optimality conditions. Furthermore, we propose an algorithm that is based on a globalized sequential quadratic programming method and a primal-dual active set strategy, and present numerical results. In the last chapter we consider a quasilinear parabolic equation with quadratic gradient terms, which arises in the modeling of an optimal portfolio in incomplete markets. The existence of weak solutions is shown by considering a sequence of approximate solutions. The main difficulty of the proof is to infer the strong convergence of the sequence. Furthermore, we prove the uniqueness of weak solutions under a smallness condition on the derivatives of the covariance matrices with respect to the solution, but without additional regularity assumptions on the solution. The results are illustrated by a numerical example.
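To illustrate the kind of finite-difference discretization discussed above, here is a minimal sketch of a plain explicit scheme for the *linear* Black-Scholes equation, stepped backward from expiry in the time-to-maturity variable. It is not the high-order compact scheme of the thesis, and all parameters (r, sigma, K, T, grid sizes) are illustrative assumptions.

```python
import numpy as np

# Explicit finite differences for the linear Black-Scholes equation,
# written in time-to-maturity tau so we march forward from the payoff:
#   V_tau = 0.5 sigma^2 S^2 V_SS + r S V_S - r V
r, sigma, K, T = 0.05, 0.2, 100.0, 1.0
S_max, M, N = 300.0, 300, 30000          # domain cutoff, space nodes, time steps
dS, dt = S_max / M, T / N                # dt kept well below the explicit stability limit
S = np.linspace(0.0, S_max, M + 1)
V = np.maximum(S - K, 0.0)               # call payoff at expiry (tau = 0)

for k in range(1, N + 1):
    Vs = (V[2:] - V[:-2]) / (2 * dS)                # central first derivative
    Vss = (V[2:] - 2 * V[1:-1] + V[:-2]) / dS**2    # central second derivative
    V[1:-1] += dt * (0.5 * sigma**2 * S[1:-1]**2 * Vss
                     + r * S[1:-1] * Vs - r * V[1:-1])
    V[0] = 0.0                                      # at S = 0 the call is worthless
    V[-1] = S_max - K * np.exp(-r * k * dt)         # deep in-the-money asymptote

price_atm = V[np.searchsorted(S, K)]     # close to the Black-Scholes value at S = K
```

With these illustrative parameters the at-the-money value lands near the closed-form Black-Scholes price (about 10.45), which is a useful sanity check before moving to nonlinear variants.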
Abstract:
We deal with five problems arising in the field of logistics: the Asymmetric TSP (ATSP), the TSP with Time Windows (TSPTW), the VRP with Time Windows (VRPTW), the Multi-Trip VRP (MTVRP), and the Two-Echelon Capacitated VRP (2E-CVRP). The ATSP requires finding a least-cost Hamiltonian tour in a digraph. We survey models and classical relaxations, and describe the most effective exact algorithms from the literature. A survey and analysis of the polynomial formulations are also provided. The considered algorithms and formulations are experimentally compared on benchmark instances. The TSPTW requires finding, in a weighted digraph, a least-cost Hamiltonian tour visiting each vertex within a given time window. We propose a new exact method, based on new tour relaxations and dynamic programming. Computational results on benchmark instances show that the proposed algorithm outperforms the state-of-the-art exact methods. In the VRPTW, a fleet of identical capacitated vehicles located at a depot must be optimally routed to supply customers with known demands and time window constraints. Different column generation bounding procedures and an exact algorithm are developed. The new exact method closed four of the five open Solomon instances. The MTVRP is the problem of optimally routing capacitated vehicles located at a depot to supply customers without exceeding maximum driving time constraints. Two set-partitioning-like formulations of the problem are introduced. Lower bounds are derived and embedded into an exact solution method that can solve benchmark instances with up to 120 customers. The 2E-CVRP requires designing the optimal routing plan to deliver goods from a depot to customers by using intermediate depots. The objective is to minimize the sum of routing and handling costs. A new mathematical formulation is introduced. Valid lower bounds and an exact method are derived.
Computational results on benchmark instances show that the new exact algorithm outperforms the state-of-the-art exact methods.
Abstract:
CMV is the most frequent pathogen after heart transplantation (Tx), causing both organ-specific clinical syndromes and immune-mediated damage that can lead to acute rejection or chronic coronary artery disease (CAV). Antiviral prophylaxis appears superior to the pre-emptive approach in reducing CMV events, but the anti-CMV effect of everolimus (EVE) in addition to antiviral prophylaxis has not yet been analyzed. AIM OF THE STUDY: to analyze the interaction between antiviral prophylaxis strategies and the use of EVE or MMF on the incidence of CMV-related events (infection, need for treatment, disease/syndrome) in heart transplantation. MATERIALS AND METHODS: patients who underwent heart Tx and were treated with EVE or MMF together with either prophylactic or pre-emptive antiviral treatment were included. CMV infection was monitored by pp65 antigenemia and DNA PCR. CMV disease/syndrome was the primary endpoint. RESULTS: 193 patients (10% D+/R-) were included in the study (42 on EVE and 149 on MMF). Overall, CMV infection (45% vs. 79%), the need for antiviral treatment (20% vs. 53%), and CMV disease/syndrome (2% vs. 15%) were significantly lower in the EVE group than in the MMF group (all P<0.01). Prophylaxis was more effective than the pre-emptive strategy in preventing all outcomes in patients on MMF (P=0.03), but not in patients on EVE. Notably, patients on EVE with a pre-emptive strategy had fewer CMV infections (48% vs. 70%; P=0.05) and less CMV disease/syndrome (0% vs. 8%; P=0.05) than patients on MMF with prophylaxis. CONCLUSIONS: EVE significantly reduces CMV-related events compared with MMF. The benefit of prophylaxis is preserved only in patients treated with MMF, whereas EVE appears to provide additional protection against CMV events without the need for extensive antiviral treatment.
Abstract:
The considerations developed in this paper aim to clarify the delicate issue of urbanization works carried out by private developers to offset urbanization charges ("opere di urbanizzazione a scomputo"). The legislation governing the execution of public works in total or partial offset of urbanization charges has undergone numerous amendments and judicial interpretations in the wake of the landmark ruling of the European Court of Justice. With that judgment, the Kirchberg judges introduced a specific procedural obligation on private parties: where individual works exceed the thresholds of European relevance, they must be awarded through the tendering procedures laid down by Directive 93/37/EEC. It should be noted that, until then, the direct award of works to the private developer was, in the legislator's view, the instrument for building the infrastructure needed to support building developments that the public administration was often unable to provide itself. Against this legislative background, the Court of Justice's ruling appears altogether disruptive. By introducing the principle that even the direct execution of urbanization works by a private party must comply with the European rules on public procurement procedures, it inevitably brings into confrontation two bodies of law, public procurement and urban planning, which until then had managed to run in parallel without overlapping. The national legislator accepted the Community principle with great difficulty and over the years was all but forced, through a series of legislative amendments, to broaden its scope.
After analyzing the various corrective measures made to the Public Procurement Code, this research therefore seeks to determine whether the current regulatory framework represents a genuine point of balance between the competing demands of land-use planning and compliance with the Community principles of competition in the selection of the contractor.
Abstract:
Primary productive cytomegalovirus (CMV) infection is efficiently controlled by antiviral CD8+ T cells in the immunocompetent host. The viral genome, however, is able to persist in certain cell types in a non-replicative state termed latency, without production of infectious progeny virus. The molecular mechanisms underlying the establishment and maintenance of latency are still largely unknown. There is evidence that cellular defense mechanisms induce circularization and chromatinization of viral genomes, thereby largely silencing viral gene expression (Marks & Spector, 1984; Reeves et al., 2006). The genomes are not completely inactive, however: for murine CMV (mCMV), sporadic transcription of the genes ie1 and ie2 during latency has already been demonstrated (Kurz et al., 1999; Grzimek et al., 2001). In the present work, a comprehensive in vivo latency analysis was performed for the first time, characterizing viral transcription in a kinetic study of the transcripts IE1, IE3, E1, m164, M105 and M86, which represent all three kinetic classes. After latency was established, verified by the absence of infectious virus, all tested transcripts could be quantified in the lungs. Interestingly, at no time point analyzed was the transcriptional activity compatible with the classical IE-E-L kinetics of productive infection. Instead, transcript expression was stochastic, with activity declining progressively over time. Transcripts expressed during latency that encode antigenic peptides can render infected cells visible to the immune system, which would lead to continuous restimulation of the memory T-cell pool.
By simultaneously analyzing transcript expression and the frequencies of epitope-specific CD8+ T cells during latency (IE1, m164, M105), a possible link between transcriptional activity and expansion of the memory T-cell pool was investigated. Further characterization of subpopulations of the epitope-specific CD8+ T cells identified the SLECs (short-lived effector cells; CD127low CD62Llow KLRG1high) as the dominant population in lung and spleen during mCMV latency. A further part of this work addressed whether IE gene expression is required for the establishment of latency. Using the recombinant mCMV-Δie2-DTR, which carries the gene sequence of the diphtheria toxin receptor (DTR) in place of the ie2 gene, infected DTR-expressing cells could be conditionally depleted by DT administration. In the latently infectable cell type of the liver, the LSECs (liver sinusoidal endothelial cells), 90 hours of DT administration after mCMV-Δie2-DTR infection reduced the viral load to the level of latently infected LSECs. These data support the hypothesis of a genome that is inactive from the outset and does not require IE gene expression to establish latency. In addition, this approach represents a new animal model for the establishment of latency. Shorter waiting times until latency is fully established, compared with the previous bone marrow transplantation model, could considerably reduce animal housing costs and accelerate research progress.
Abstract:
Electronic waste generated from the consumption of durable goods in developed countries is often exported to underdeveloped countries for reuse, recycling and disposal with unfortunate environmental consequences. The lack of efficient disposal policies within developing nations coupled with global free trade agreements make it difficult for consumers to internalize these costs. This paper develops a two-country model, one economically developed and the other underdeveloped, to solve for optimal tax policies necessary to achieve the efficient allocation of economic resources in an economy with a durable good available for global reuse without policy measures in the underdeveloped country. A tax in the developed country on purchases of the new durable good combined with a waste tax set below the domestic external cost of disposal is sufficient for global efficiency. The implication of allowing free global trade in electronic waste is also examined, where optimal policy resembles a global deposit-refund system.
Abstract:
In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. A key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process vs. those that measure flux through the autophagy pathway (i.e., the complete process); thus, a block in macroautophagy that results in autophagosome accumulation needs to be differentiated from stimuli that result in increased autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. 
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
Abstract:
International migration has increased rapidly in the Czech Republic, with more than 150,000 legally registered foreign residents at the end of 1996. A large proportion of these are in Prague - 35% of the total in December 1996. The aim of this project was to enrich the fund of information concerning the "environment", reasons and "mechanisms" behind immigration to the Czech Republic. Mr. Drbohlav looked first at the empirical situation and on this basis set out to test certain well-known migration theories. He focused on four main areas: 1) a detailed description and explanation of the stock of foreign citizens legally settled in Czech territory, concentrating particularly on "economic" migrants; 2) a questionnaire survey targeting a total of 192 Ukrainian workers (98 in the fall 1995 and 94 in the fall 1996) working in Prague or its vicinity; 3) a second questionnaire survey of 40 "western" firms (20 in 1996 and 20 in 1997) operating out of Prague; 4) an opinion poll on how the Czech population reacts to foreign workers in the CR. Over 80% of economic immigrants at the end of 1996 were from European countries, 16% from Asia and under 2% from North America. The largest single nationalities were Ukrainians, Slovaks, Vietnamese and Poles. There has been a huge increase in the Ukrainian immigrant community over both space (by region) and time (a ten-fold increase since 1993), and at 40,000 persons this represents one third of all legal immigrants. Indications are that many more live and work there illegally. Young males with low educational/skills levels predominate, in contrast with the more heterogeneous immigration from the "West". The primary reason for this migration is the higher wages in the Czech Republic. In 1994 the relative figures of GDP adjusted for parity of purchasing power were US$ 8,095 for the Czech Republic versus US$ 3,330 for the Ukraine as a whole and US$ 1,600 for the Zakarpatye region from which 49% of the respondents in the survey came. 
On an individual level, the average Czech wage is about US$ 330 per month, while 50% of the Ukrainian respondents put their last monthly wage before leaving for the Czech Republic at under US$ 27. The very low level of unemployment in the Czech Republic (fluctuating around 4%) was also mentioned as an important factor. Migration was seen as a way of diversifying the family's source of income: 49% of the respondents had made their plans together with partners or close relatives, while 45% regularly send remittances to Ukraine (94% do so through friends or relatives). Looking at Ukrainian migration from the point of view of dual market theory, these migrants' type and conditions of work, workload and earnings were all significantly worse than in the primary sector, which employs well-educated people and offers them good earnings, job security and benefits. 53% of respondents were working and/or staying in the Czech Republic illegally at the time of the research, 73% worked as unqualified, unskilled or auxiliary workers, 62% worked more than 12 hours a day, and 40% evaluated their working conditions as hard. 51% had no days off, earnings were low in relation to the number of hours worked, and 85% said that their earnings did not increase over time. Nearly half the workers were recruited in Ukraine and only 4% expressed a desire to stay in the Czech Republic. Network theories were also borne out to some extent, as 33% of immigrants came together with friends from the same village, town or region in Ukraine. The number who have relatives working in the Czech Republic is rising, and many wish to invite relatives or children to visit them. The presence of organisations which organise cross-border migration, including some which resort to arranging illegal documents, also gives some support for the institutional theory. Mr.
Drbohlav found that all the migration theories considered offered some insights on the situation, but that none was sufficient to explain it all. He also points out parallels with many other regions of the world, including Central America, South and North America, Melanesia, Indonesia, East Africa, India, the Middle East and Russia. For the survey of foreign and international firms, those chosen were largely from countries represented by more than one company and were mainly active in market services such as financial and trade services, marketing and consulting. While 48% of the firms had more than 10,000 employees spread through many countries, more than two thirds had fewer than 50 employees in the Czech Republic. Czechs formed 80% plus of general staff in these firms although not more than 50% of senior management, and very few other "easterners" were employed. All companies absolutely denied employing people illegally. The average monthly wage of Czech staff was US$ 850, with that of top managers from the firm's "mother country" being US$ 6,350 and that of other western managers US$ 3,410. The foreign staff were generally highly mobile and were rarely accompanied by their families. Most saw their time in the Czech Republic as positive for their careers but very few had any intention of remaining there. Factors in the local situation which were evaluated positively included market opportunities, the economic and political environment, the quality of technical and managerial staff, and cheap labour and low production costs. In contrast, the level of appropriate business ethics and conduct, the attitude of local and regional authorities, environmental production conditions, the legal environment and financial markets and fiscal policy were rated very low. In the final section of his work Mr. Drbohlav looked at the opinions expressed by the local Czech population in a poll carried out at the beginning of 1997. 
This confirmed that international labour migration has become visible in this country, with 43% of respondents knowing at least one foreigner employed by a Czech firm in this country. Perceptions differ according to the region from which the workers come, and those from "the West" are preferred to those coming from further east. 49% saw their attitude towards the former as friendly but only 20% felt thus towards the latter. Overall, attitudes towards migrant workers are neutral, although 38% said that such workers should not have the same rights as Czech citizens. Sympathy towards foreign workers tends to increase with education and the standard of living, and the relatively positive attitudes towards foreigners in the South Bohemia region contradicted the frequent belief that a lack of experience of international migration lowers positive perceptions of it.
Abstract:
BACKGROUND: Trauma care is expensive. However, reliable data on the exact lifelong costs incurred by a major trauma patient are lacking. Discussion usually focuses on direct medical costs--underestimating consequential costs resulting from absence from work and permanent disability. METHODS: Direct medical costs and consequential costs of 63 major trauma survivors (ISS >13) at a Swiss trauma center from 1995 to 1996 were assessed 5 years posttrauma. The following cost evaluation methods were used: correction cost method (direct cost of restoring an original state), human capital method (indirect cost of lost productivity), contingent valuation method (human cost as the lost quality of life), and macroeconomic estimates. RESULTS: Mean ISS (Injury Severity Score) was 26.8 +/- 9.5 (mean +/- SD). In all, 22 patients (35%) were disabled, causing discounted average lifelong total costs of USD 1,293,800, compared with 41 patients (65%) who recovered without any disabilities with incurred costs of USD 147,200 (average of both groups USD 547,800). Two thirds of these costs were attributable to a loss of production whereas only one third was a result of the cost of correction. Primary hospital treatment (USD 27,800 +/- 37,800) was only a minor fraction of the total cost--less than the estimated cost of police and the judiciary. Loss of quality of life led to considerable intangible human costs similar to real costs. CONCLUSIONS: Trauma costs are commonly underestimated. Direct medical costs make up only a small part of the total costs. Consequential costs, such as lost productivity, are well in excess of the usual medical costs. Mere cost averages give a false estimate of the costs incurred by patients with/without disabilities.
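The overall average reported above can be reproduced from the two group figures. A quick arithmetic check, using the patient counts and group means from the abstract (the small discrepancy is consistent with rounding of the published group means):

```python
# Weighted average of lifelong costs across the two outcome groups:
# 22 disabled patients vs. 41 patients who recovered without disability.
n_dis, n_rec = 22, 41
cost_dis, cost_rec = 1_293_800, 147_200   # USD, discounted lifelong totals
overall = (n_dis * cost_dis + n_rec * cost_rec) / (n_dis + n_rec)
# overall comes out near USD 547,600, matching the reported USD 547,800
# up to rounding of the published group means.
```

This kind of cross-check is worth doing whenever an abstract reports both subgroup and pooled figures.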
Abstract:
In her book 'Living on Light', Jasmuheen tries to persuade people worldwide to follow her drastic nutrition rules in order to boost their quality of life. Several deaths have been reported as a fatal consequence. A doctor of chemistry who credibly claimed to have been 'living on light' for 2 years, except for a daily intake of up to 1.5 l of fluid containing no or almost no calories, was interested in a scientific study of this phenomenon. PARTICIPANT AND METHODS: The 54-year-old man was subjected to a rigorous 10-day isolation study with complete absence of nutrition. During the study he received an unlimited amount of tea and mineral water but had no caloric intake. Parameters monitoring his metabolic and psychological state as well as vital signs were measured regularly, and the safety of the individual was ensured throughout the study. The subject agreed to these terms and the study was approved by the local ethics committee.
Abstract:
The feasibility of carbon sequestration in cement kiln dust (CKD) was investigated in a series of batch and column experiments conducted under ambient temperature and pressure conditions. The significance of this work is the demonstration that alkaline wastes, such as CKD, are highly reactive with carbon dioxide (CO2). In the presence of water, CKD can sequester greater than 80% of its theoretical capacity for carbon without any amendments or modifications to the waste. Other mineral carbonation technologies for carbon sequestration rely on the use of mined mineral feedstocks as the source of oxides. The mining, pre-processing and reaction conditions needed to create favorable carbonation kinetics all require significant additions of energy to the system. Therefore, their actual net reduction in CO2 is uncertain. Many suitable alkaline wastes are produced at sites that also generate significant quantities of CO2. While independently, the reduction in CO2 emissions from mineral carbonation in CKD is small (~13% of process related emissions), when this technology is applied to similar wastes of other industries, the collective net reduction in emissions may be significant. The technical investigations presented in this dissertation progress from proof of feasibility through examination of the extent of sequestration in core samples taken from an aged CKD waste pile, to more fundamental batch and microscopy studies which analyze the rates and mechanisms controlling mineral carbonation reactions in a variety of fresh CKD types. Finally, the scale of the system was increased to assess the sequestration efficiency under more pilot or field-scale conditions and to clarify the importance of particle-scale processes under more dynamic (flowing gas) conditions. 
A comprehensive set of material characterization methods, including thermal analysis, X-ray diffraction, and X-ray fluorescence, was used to confirm extents of carbonation and to better elucidate the compositional factors controlling the reactions. The results of these studies show that the rate of carbonation in CKD is controlled by the extent of carbonation. With increased degrees of conversion, particle-scale processes such as intraparticle diffusion and CaCO3 micropore precipitation patterns begin to limit the rate and possibly the extent of the reactions. Rates may also be influenced by the nature of the oxides participating in the reaction, slowing when the free or unbound oxides are consumed and reaction conditions shift towards the consumption of less reactive Ca species. While microscale processes and compositional effects appear to be important at later times, the overall degrees of carbonation observed in the wastes were significant (> 80%), a majority of which occurred within the first 2 days of reaction. Under the operational conditions applied in this study, the degree of carbonation in CKD achieved in column-scale systems was comparable to that observed under ideal batch conditions. In addition, the similarity in sequestration performance among several different CKD waste types indicates that, aside from available oxide content, no compositional factors significantly hinder the ability of the waste to sequester CO2.
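As a rough illustration of the chemistry underlying these results, the theoretical CO2 capacity of an alkaline waste can be estimated from its free lime content via the carbonation reaction CaO + CO2 → CaCO3. The 45% CaO mass fraction below is a hypothetical composition chosen for the example, not a figure from the dissertation:

```python
# Stoichiometric CO2 capacity from available CaO: CaO + CO2 -> CaCO3
M_CAO, M_CO2 = 56.08, 44.01   # molar masses, g/mol

def co2_capacity(cao_mass_fraction):
    """g of CO2 sequestered per g of waste at 100% conversion of CaO."""
    return cao_mass_fraction * M_CO2 / M_CAO

# Hypothetical CKD containing 45 wt% available CaO:
cap = co2_capacity(0.45)      # ~0.35 g CO2 per g of waste
sequestered = 0.80 * cap      # at the >80% conversion reported above
```

Even this crude estimate shows why hydrated alkaline wastes are attractive: a few tenths of a gram of CO2 per gram of waste, captured at ambient conditions without mined feedstock.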
Abstract:
BACKGROUND: Bleeding is a frequent complication during surgery. The intraoperative administration of blood products, including packed red blood cells, platelets and fresh frozen plasma (FFP), is often life-saving. Complications of blood transfusions contribute considerably to perioperative costs and blood product resources are limited. Consequently, strategies to optimize the decision to transfuse are needed. Bleeding during surgery is a dynamic process and may result in major blood loss and coagulopathy due to dilution and consumption. The indication for transfusion should be based on reliable coagulation studies. While hemoglobin levels and platelet counts are available within 15 minutes, standard coagulation studies require one hour. Therefore, the decision to administer FFP often has to be made in the absence of any data. Point-of-care testing of prothrombin time ensures that one major parameter of coagulation is available in the operating theatre within minutes. It is fast, easy to perform, inexpensive and may enable physicians to rationally determine the need for FFP. METHODS/DESIGN: The objective of the POC-OP trial is to determine the effectiveness of point-of-care prothrombin time testing in reducing the administration of FFP. It is a patient- and assessor-blinded, single-center randomized controlled parallel-group trial in 220 patients aged between 18 and 90 years undergoing major surgery (any type, except cardiac surgery and liver transplantation) with an estimated blood loss during surgery exceeding 20% of the calculated total blood volume or a requirement for FFP according to the judgment of the physicians in charge. Patients are randomized to usual care plus point-of-care prothrombin time testing or usual care alone without point-of-care testing. The primary outcome is the relative risk of receiving any FFP perioperatively.
The inclusion of 110 patients per group will yield more than 80% power to detect a clinically relevant relative risk of 0.60 of receiving FFP in the experimental as compared with the control group. DISCUSSION: Point-of-care prothrombin time testing in the operating theatre may reduce the administration of FFP considerably, which in turn may decrease costs and complications usually associated with the administration of blood products. TRIAL REGISTRATION: NCT00656396.
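The stated sample size can be sanity-checked with a standard normal-approximation power calculation for two proportions. The 60% baseline FFP rate assumed below is hypothetical, since the abstract does not report the expected control-group rate; a relative risk of 0.60 then corresponds to 36% in the intervention arm.

```python
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_proportions(p0, p1, n_per_group):
    """Approximate power of a two-sided two-proportion z-test at alpha = 0.05."""
    z_crit = 1.959964   # two-sided critical value for alpha = 0.05
    se = sqrt(p0 * (1 - p0) / n_per_group + p1 * (1 - p1) / n_per_group)
    return norm_cdf(abs(p0 - p1) / se - z_crit)

# Hypothetical 60% baseline FFP rate; RR 0.60 gives 36% in the POC arm.
power = power_two_proportions(0.60, 0.36, 110)   # comfortably above 0.80
```

Under this assumed baseline the approximation gives power well above the 80% claimed, consistent with the trial's sample size statement; with a lower baseline rate the same n would yield less power.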
Abstract:
The aim of the study was to report on the oral, dental and prosthetic conditions as well as the therapeutic measures for temporarily institutionalized geriatric patients. The patients were referred to the dentist after dental problems were observed by the physicians or reported by the patients themselves. This resulted in a selection among the geriatric patients, but they are considered representative of this segment of patients exhibiting typical signs of undertreatment. The main problem was poor retention of the prostheses, which was associated with insufficient masticatory function and poor nutritional status. Forty-seven percent of the patients were edentulous or had at most two non-functional residual roots. Altogether 70% of the maxillary and 51% of the mandibular jaws exhibited no remaining teeth. Eighty-nine percent of the patients had a removable denture, and it was observed that maxillary dentures were worn regularly, in contrast to mandibular dentures. The partially edentulous patients had a mean number of ten teeth, significantly more in the mandibular than in the maxillary jaw. Treatment consisted mainly of the adaptation and repair of dentures, tooth extractions and fillings. Only a few appointments (mostly two) were necessary to improve the dental conditions, resulting in low costs. Patients without dentures or with no need for denture repair generated the lowest costs. Slightly more visits were necessary for patients with dementia and musculoskeletal problems. The present findings show that regular maintenance care of institutionalized geriatric patients would limit costs in a long-term perspective, improve the oral situation and reduce the need for invasive treatment.
Abstract:
A combinatorial protocol (CP) is introduced here and interfaced with multiple linear regression (MLR) for variable selection. The efficiency of CP-MLR is primarily based on restricting the entry of correlated variables into the model development stage. It has been used for the analysis of the Selwood et al. data set [16], and the obtained models are compared with those reported from the GFA [8] and MUSEUM [9] approaches. For this data set CP-MLR could identify three highly independent models (27, 28 and 31) with Q2 values in the range of 0.518-0.632. These models are also divergent and unique. Although the present study does not share any models with the GFA [8] and MUSEUM [9] results, several descriptors are common to all these studies, including the present one. A simulation is also carried out on the same data set to explain the model formation in CP-MLR. The results demonstrate that the proposed method should be able to offer solutions to data sets with 50 to 60 descriptors in a reasonable time frame. By carefully selecting the inter-parameter correlation cutoff values in CP-MLR one can identify divergent models and handle data sets larger than the present one without excessive computer time.
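The core idea of restricting the entry of correlated variables can be sketched as a greedy correlation filter. This is a simplified illustration on synthetic data, not the published combinatorial protocol; the cutoff, the candidate ordering, and the data are all assumptions made for the example.

```python
import numpy as np

def select_uncorrelated(X, y, r_cutoff=0.5, max_vars=3):
    """Greedy sketch of the CP-MLR idea: rank descriptors by |correlation|
    with y, then admit them one by one, rejecting any candidate whose
    correlation with an already-admitted descriptor exceeds r_cutoff."""
    n, p = X.shape
    order = np.argsort([-abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(p)])
    chosen = []
    for j in order:
        if len(chosen) == max_vars:
            break
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < r_cutoff for k in chosen):
            chosen.append(int(j))
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=40)   # column 1 nearly duplicates column 0
y = 2 * X[:, 0] - X[:, 3] + 0.1 * rng.normal(size=40)
sel = select_uncorrelated(X, y)
# columns 0 and 1 are highly inter-correlated, so at most one of them enters
```

Varying `r_cutoff` here plays the same role as the inter-parameter correlation cutoff in CP-MLR: a tighter cutoff forces more divergent variable subsets at the price of examining more candidates.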
Abstract:
Using stress and coping as a unifying theoretical concept, a series of five models was developed in order to synthesize the survey questions and to classify information. These models identified the question, listed the research study, described measurements, listed workplace data, and listed industry and national reference data. A set of 38 instrument questions was developed within the five coping correlate categories. In addition, a set of 22 stress symptoms was also developed. The study was conducted within two groups, police and professors, on a large university campus. The groups were selected because their occupations were diverse, but they were a part of the same macroenvironment. The premise was that police officers would be more highly stressed than professors. Of a total study group of 80, there were 37 respondents. The difference in the mean stress responses was observable between the two groups. Not only were the responses similar within each group, but the stress level of response was also similar within each group. While the response to the survey instrument was good, only 3 respondents answered the stress symptom survey properly. It was determined that none of the 37 respondents believed that they were ill. This perception of being well was also evidenced by the grand mean of the stress scores of 2.76 (3.0 = moderate stress). This also caused fewer independent variables to be entered in the multiple regression model. The survey instrument was carefully designed to be universal. Universality is the ability to transcend occupational or regional definitions as applied to stress.
It is the ability to measure responses within broad categories such as physiological, emotional, behavioral, social, and cognitive functions without losing the ability to measure the detail within the individual questions, or the relationships between questions and categories. Replication is much easier to achieve with standardized categories, questions, and measurement procedures such as those developed for the universal survey instrument. Because the survey instrument is universal, it can be used as an analytical device, an assessment device, a basic tool for planning and a follow-up instrument to measure individual response to planned reductions in occupational stress. (Abstract shortened with permission of author.)