912 results for average complexity
Abstract:
INTRODUCTION: Sequential antibiotic therapy (SAT) is safe and economical; however, unnecessary intravenous (IV) administration is common. The objective of this work was to evaluate the effectiveness of an intervention to implement SAT in a teaching hospital in Brazil. METHODS: This was a prospective, interventional, historically controlled study conducted at the Hospital de Clínicas, Universidade Federal de Uberlândia, State of Minas Gerais, Brazil, a high-complexity teaching hospital with 503 beds. In each of the periods, from 04/04/05 to 07/20/05 (pre-intervention) and from 09/24/07 to 12/20/07 (intervention), 117 patients were evaluated. After the pre-intervention period, guidelines were developed and then implemented during the intervention period, along with educational measures and a reminder system added to the patients' prescriptions. RESULTS: In the pre-intervention and intervention periods, IV antibiotics were used for an average of 14.8 and 11.8 days of treatment, respectively. Ceftriaxone was the most prescribed antibiotic in both periods (23.4% and 21.6%, respectively). Starting from the first antibiotic prescription, the average length of hospitalization was 21.8 and 17.5 days, respectively. SAT occurred in only 4 and 5 courses of treatment, respectively, and 12.8% and 18.8% of the patients died in the respective periods. CONCLUSIONS: Under the conditions presented, the evaluated intervention strategy was ineffective in promoting the switch from IV to oral antibiotic administration (SAT).
Abstract:
ABSTRACT - The definition and measurement of output are central issues for hospital management. Hospital output, when treated cases are considered, rests on two aspects: the definition of patient classification systems as a methodology for identifying products, and the creation of casemix indices for comparing those products. For their definition and implementation, characteristics related to the complexity of the cases (a supply-side attribute), to their severity (a demand-side attribute), or a mixture of both may be considered. In turn, the analysis of hospitals' admission profiles and admission policies gains greater relevance in the context of new experiments planned and under way in the SNS and of the renewed need for evaluation and regulation that follows from them. This study set out to discuss the methodology for calculating hospitals' casemix indices, introducing the severity of the treated cases as a relevant attribute for that purpose. A sample of 950,443 cases from the 2002 discharge summary database was analyzed, with particular attention to the 31 hospitals subsequently constituted as SA hospitals. Three casemix indices were considered: a complexity index (based on DRG relative weights), a severity index (based on the expected-mortality scale of disease staging, recalibrated for Portugal), and a joint index (the mean of the previous two). The analysis of the complexity, severity, and joint indices was found to provide distinct information on the admission profiles of the hospitals considered. The complexity and severity indices show distinct associations with the characteristics of the hospitals and of the patients treated. In addition, there is a clear difference between medically and surgically treated cases. Nevertheless, for the hospitals analyzed as a whole, it was observed that the hospitals treating the most severe cases also treat the most complex ones; some hospitals where this does not hold were also identified and, where possible, likely reasons for that behavior were suggested.
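As a purely illustrative sketch of how such indices can be assembled, the Python snippet below computes a per-hospital complexity index (mean DRG relative weight), a severity index (mean expected mortality, here normalized by the overall mean, which is an assumption), and a joint index as their simple mean; the records, field names, and normalization are hypothetical and are not the study's actual data or method.

```python
from statistics import mean

# Hypothetical discharge records: each case carries the relative weight of its
# DRG and an expected-mortality score (disease staging scale), as in the abstract.
cases = [
    {"hospital": "H1", "drg_weight": 1.20, "expected_mortality": 0.03},
    {"hospital": "H1", "drg_weight": 0.85, "expected_mortality": 0.01},
    {"hospital": "H2", "drg_weight": 2.10, "expected_mortality": 0.12},
    {"hospital": "H2", "drg_weight": 1.05, "expected_mortality": 0.02},
]

def casemix_indices(cases, overall_mortality):
    """Per-hospital complexity, severity and joint indices (illustrative only).

    complexity: mean DRG relative weight of the hospital's cases
    severity:   mean expected mortality, scaled by the overall mean so that
                1.0 represents the average case (an assumption made here)
    joint:      simple mean of the two indices, as described in the abstract
    """
    by_hospital = {}
    for c in cases:
        by_hospital.setdefault(c["hospital"], []).append(c)
    out = {}
    for hosp, recs in by_hospital.items():
        complexity = mean(r["drg_weight"] for r in recs)
        severity = mean(r["expected_mortality"] for r in recs) / overall_mortality
        out[hosp] = {
            "complexity": complexity,
            "severity": severity,
            "joint": (complexity + severity) / 2,
        }
    return out

overall = mean(c["expected_mortality"] for c in cases)
print(casemix_indices(cases, overall))
```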
Abstract:
Doctoral thesis in Accounting.
Abstract:
Pressures on the Brazilian Amazon forest have been accentuated by agricultural activities practiced by families encouraged to settle in this region in the 1970s by the government's colonization program. The aims of this study were to analyze the temporal and spatial evolution of land cover and land use (LCLU) in the lower Tapajós region, in the state of Pará. We contrast 11 watersheds that are generally representative of the colonization dynamics in the region. For this purpose, Landsat satellite images from three different years, 1986, 2001, and 2009, were analyzed with Geographic Information Systems. Individual images were subjected to an unsupervised classification using the Maximum Likelihood Classification algorithm available in GRASS. The classes retained for the representation of LCLU in this study were: (1) slightly altered old-growth forest, (2) succession forest, (3) crop land and pasture, and (4) bare soil. The analysis and observation of general trends in the 11 watersheds show that LCLU is changing very rapidly. The average deforestation of old-growth forest across all the watersheds was estimated at more than 30% for the period from 1986 to 2009. The local-scale analysis of watersheds reveals the complexity of LCLU, notably in relation to large changes in the temporal and spatial evolution of watersheds. Proximity to the sprawling city of Itaituba is related to the highest rate of deforestation in two watersheds. The opening of roads such as the Transamazonian highway is associated with the second highest rate of deforestation in three watersheds.
Abstract:
Distributed data aggregation is an important task, allowing the decentralized determination of meaningful global properties, which can then be used to direct the execution of other applications. The resulting values come from the distributed computation of functions like count, sum, and average. Application examples can be found in determining the network size, total storage capacity, average load, majorities, and many others. In the last decade, many different approaches have been proposed, with different trade-offs in terms of accuracy, reliability, and message and time complexity. Due to the considerable amount and variety of aggregation algorithms, it can be difficult and time consuming to determine which techniques will be more appropriate to use in specific settings, justifying the existence of a survey to aid in this task. This work reviews the state of the art on distributed data aggregation algorithms, providing three main contributions. First, it formally defines the concept of aggregation, characterizing the different types of aggregation functions. Second, it succinctly describes the main aggregation techniques, organizing them in a taxonomy. Finally, it provides some guidelines toward the selection and use of the most relevant techniques, summarizing their principal characteristics.
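One concrete example of the kind of technique such a survey covers is gossip-based averaging (push-sum, in the style of Kempe et al.); the sketch below is a minimal illustration under the assumption of a fully connected network and is not an algorithm reproduced from the survey itself.

```python
import random

def push_sum(values, rounds=50, seed=0):
    """Illustrative push-sum gossip averaging.

    Each node keeps a (sum, weight) pair; at every round it keeps half locally
    and pushes half to a randomly chosen node. Every estimate sum/weight
    converges to the network-wide average of the initial values.
    """
    rng = random.Random(seed)
    n = len(values)
    sums = list(values)        # s_i starts at the local value
    weights = [1.0] * n        # w_i starts at 1

    for _ in range(rounds):
        out_s = [0.0] * n
        out_w = [0.0] * n
        for i in range(n):
            half_s, half_w = sums[i] / 2, weights[i] / 2
            out_s[i] += half_s     # keep half locally
            out_w[i] += half_w
            j = rng.randrange(n)   # fully connected network assumed here
            out_s[j] += half_s     # push the other half to a random node
            out_w[j] += half_w
        sums, weights = out_s, out_w

    return [s / w for s, w in zip(sums, weights)]  # per-node average estimates

estimates = push_sum([10.0, 20.0, 30.0, 40.0])
print(estimates)  # each entry approaches the true average, 25.0
```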
Abstract:
Aims: To evaluate the differences in linear and complex heart rate dynamics in twin pairs according to fetal sex combination [male-female (MF), male-male (MM), and female-female (FF)]. Methods: Fourteen twin pairs (6 MF, 3 MM, and 5 FF) were monitored between 31 and 36.4 weeks of gestation. Twenty-six fetal heart rate (FHR) recordings of both twins were simultaneously acquired and analyzed with a system for computerized analysis of cardiotocograms. Linear and nonlinear FHR indices were calculated. Results: Overall, MM twins presented higher intrapair average in linear indices than the other pairs, whereas FF twins showed higher sympathetic-vagal balance. MF twins exhibited higher intrapair average in entropy indices and MM twins presented lower entropy values than FF twins considering the (automatically selected) threshold rLu. MM twin pairs showed higher intrapair differences in linear heart rate indices than MF and FF twins, whereas FF twins exhibited lower intrapair differences in entropy indices. Conclusions: The results of this exploratory study suggest that twins have sex-specific differences in linear and nonlinear indices of FHR. MM twins expressed signs of a more active autonomic nervous system and MF twins showed the most active complexity control system. These results suggest that fetal sex combination should be taken into consideration when performing detailed evaluation of the FHR in twins.
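The entropy indices mentioned above belong to the family of approximate/sample entropy measures computed over the FHR series with a tolerance threshold r (the study selects this threshold automatically and calls it rLu). For rough orientation only, the sketch below computes a plain sample entropy; it is not the study's exact index, and the default tolerance and the demo values are assumptions.

```python
import math

def sample_entropy(series, m=2, r=None):
    """Sample entropy SampEn(m, r) of a 1-D series (illustrative, O(N^2)).

    B counts template matches of length m, A counts matches of length m + 1,
    both within a Chebyshev tolerance r and excluding self-matches; the result
    is -ln(A / B).
    """
    x = list(series)
    n = len(x)
    if r is None:
        # common default: 0.2 * standard deviation (an assumption; the study
        # above uses an automatically selected threshold it calls rLu)
        mu = sum(x) / n
        sd = (sum((v - mu) ** 2 for v in x) / n) ** 0.5
        r = 0.2 * sd

    def count_matches(dim):
        count = 0
        num_templates = n - m  # same template count for m and m + 1 (standard form)
        for i in range(num_templates):
            for j in range(i + 1, num_templates):
                if max(abs(x[i + k] - x[j + k]) for k in range(dim)) <= r:
                    count += 1
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# Toy usage on a short synthetic heart-rate-like series; the tolerance is arbitrary.
fhr = [140, 142, 141, 143, 140, 139, 141, 144, 142, 140, 138, 141]
print(sample_entropy(fhr, m=2, r=2.0))
```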
Abstract:
BACKGROUND: Medication regimen complexity comprises multiple characteristics of the prescribed regimen, including the number of different medications in the regimen, the number of dosage units per dose, the total number of doses per day, and special instructions for administering the medications. The Medication Regimen Complexity Index (MRCI) is a specific, validated instrument used to measure medication regimen complexity, originally developed in English. OBJECTIVE: Cross-cultural translation and validation of this instrument into Brazilian Portuguese. METHODS: A cross-sectional study was carried out involving 95 patients with type 2 diabetes using multiple medications. The validation process began with translation, back-translation, and pre-testing of the instrument, producing an adapted version called the Índice de Complexidade da Farmacoterapia (ICFT). Psychometric parameters were then analyzed, including convergent validity, divergent validity, inter-rater reliability, and test-retest reliability. RESULTS: Medication regimen complexity measured by the ICFT averaged 15.7 points (standard deviation = 8.36). The ICFT showed a significant correlation with the number of medications in use (r = 0.86; p < 0.001) and with patient age (r = 0.28; p = 0.005). Inter-rater reliability yielded an intraclass correlation of 0.99 (p < 0.001), and test-retest reliability yielded a correlation of 0.997 (p < 0.001). CONCLUSION: The results show that the ICFT performs well in terms of validity and reliability and can be used as a useful tool in clinical practice and in research involving analysis of medication regimen complexity.
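To illustrate the idea of scoring regimen complexity from the components listed above, the sketch below sums hypothetical weights over the number of medications, units per dose, doses per day, and extra administration instructions; the weights and fields are invented for illustration and are not the validated MRCI/ICFT items or weights.

```python
from dataclasses import dataclass

@dataclass
class Medication:
    name: str
    units_per_dose: int        # e.g. 2 tablets per dose
    doses_per_day: int         # e.g. 3 times a day
    extra_instructions: int    # count of special directions (take with food, ...)

def regimen_complexity(meds):
    """Toy complexity score; NOT the validated MRCI/ICFT weighting."""
    score = 0.0
    for m in meds:
        score += 1.0                           # one point per distinct medication
        score += 0.5 * (m.units_per_dose - 1)  # extra units per dose add complexity
        score += 0.5 * m.doses_per_day         # more frequent dosing adds complexity
        score += 1.0 * m.extra_instructions    # each special direction adds a point
    return score

regimen = [
    Medication("metformin", units_per_dose=1, doses_per_day=2, extra_instructions=1),
    Medication("insulin glargine", units_per_dose=1, doses_per_day=1, extra_instructions=2),
]
print(regimen_complexity(regimen))  # 6.5 for this toy regimen
```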
Abstract:
We say the endomorphism problem is solvable for an element W in a free group F if it can be decided effectively whether, given U in F, there is an endomorphism Φ of F sending W to U. This work analyzes an approach due to C. Edmunds and improved by C. Sims. We prove that, when W is a two-generator word, this approach yields an efficient algorithm that solves the endomorphism problem in time polynomial in the length of U. This result gives a polynomial-time algorithm for solving, in free groups, two-variable equations in which all the variables occur on one side of the equality and all the constants on the other side.
Abstract:
"Vegeu el resum a l'inici del document del fitxer adjunt."
Abstract:
Ever since the appearance of the ARCH model [Engle (1982a)], an impressive array of variance specifications belonging to the same class of models has emerged [e.g., Bollerslev's (1986) GARCH; Nelson's (1990) EGARCH]. This research area has developed very rapidly. Nevertheless, several empirical studies suggest that the performance of such models is not always adequate [Boulier (1992)]. In this paper we propose a new specification: the Quadratic Moving Average Conditional Heteroskedasticity (QMACH) model. Its statistical properties, such as kurtosis and symmetry, as well as two estimators (Method of Moments and Maximum Likelihood), are studied. Two statistical tests are presented: the first tests for homoskedasticity and the second discriminates between the ARCH and QMACH specifications. A Monte Carlo study is presented to illustrate some of the theoretical results. An empirical study is undertaken for the DM-US exchange rate.
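For orientation, the baseline ARCH(q) variance specification from Engle (1982), to which this class of models belongs, can be written as below; the QMACH specification itself is defined in the paper and is not reproduced here.

```latex
% Standard ARCH(q) specification (Engle, 1982), shown only for orientation.
\begin{align}
  \varepsilon_t &= \sigma_t z_t, \qquad z_t \sim \mathrm{i.i.d.}\ \mathcal{N}(0,1), \\
  \sigma_t^2 &= \alpha_0 + \sum_{i=1}^{q} \alpha_i \, \varepsilon_{t-i}^2,
  \qquad \alpha_0 > 0, \ \alpha_i \ge 0 .
\end{align}
```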
Abstract:
The Whitehead minimization problem consists in finding a minimum size element in the automorphic orbit of a word, a cyclic word or a finitely generated subgroup in a finite rank free group. We give the first fully polynomial algorithm to solve this problem, that is, an algorithm that is polynomial both in the length of the input word and in the rank of the free group. Earlier algorithms had an exponential dependency in the rank of the free group. It follows that the primitivity problem – to decide whether a word is an element of some basis of the free group – and the free factor problem can also be solved in polynomial time.
Abstract:
Neuroblastoma (NB) is a neural crest-derived childhood tumor characterized by a remarkable phenotypic diversity, ranging from spontaneous regression to fatal metastatic disease. Although the cancer stem cell (CSC) model provides a trail to characterize the cells responsible for tumor onset, the NB tumor-initiating cell (TIC) has not been identified. In this study, the relevance of the CSC model in NB was investigated by taking advantage of typical functional stem cell characteristics. A predictive association was established between self-renewal, as assessed by serial sphere formation, and clinical aggressiveness in primary tumors. Moreover, cell subsets gradually selected during serial sphere culture harbored increased in vivo tumorigenicity, only highlighted in an orthotopic microenvironment. A microarray time course analysis of serial spheres passages from metastatic cells allowed us to specifically "profile" the NB stem cell-like phenotype and to identify CD133, ABC transporter, and WNT and NOTCH genes as spheres markers. On the basis of combined sphere markers expression, at least two distinct tumorigenic cell subpopulations were identified, also shown to preexist in primary NB. However, sphere markers-mediated cell sorting of parental tumor failed to recapitulate the TIC phenotype in the orthotopic model, highlighting the complexity of the CSC model. Our data support the NB stem-like cells as a dynamic and heterogeneous cell population strongly dependent on microenvironmental signals and add novel candidate genes as potential therapeutic targets in the control of high-risk NB.
Abstract:
I develop a model of endogenous bounded rationality due to search costs, arising implicitly from the problem's complexity. The decision maker is not required to know the entire structure of the problem when making choices but can think ahead, through costly search, to reveal more of it. However, the costs of search are not assumed exogenously; they are inferred from revealed preferences through her choices. Thus, bounded rationality and its extent emerge endogenously: as problems become simpler or as the benefits of deeper search become larger relative to its costs, the choices more closely resemble those of a rational agent. For a fixed decision problem, the costs of search will vary across agents. For a given decision maker, they will vary across problems. The model therefore explains why the disparity between observed choices and those prescribed under rationality varies across agents and problems. It also suggests, under reasonable assumptions, an identifying prediction: a relation between the benefits of deeper search and the depth of the search. As long as calibration of the search costs is possible, this can be tested on any agent-problem pair. My approach provides a common framework for depicting the underlying limitations that force departures from rationality in different and unrelated decision-making situations. Specifically, I show that it is consistent with violations of timing independence in temporal framing problems, with dynamic inconsistency and diversification bias in sequential versus simultaneous choice problems, and with plausible but contrasting risk attitudes across small- and large-stakes gambles.