956 results for CODON USAGE


Relevance: 20.00%

Abstract:

Background. Of the over five million annual pediatric visits to U.S. emergency departments, one-third to one-half are for non-emergent conditions. Minorities are more likely to utilize the emergency department (ED) for non-emergent conditions. Very little research has analyzed the role of illness type, perceived need, or family preferences in explaining this disparity.

Objectives. This study examined racial-ethnic differences in preferences for care among non-emergent users of the ED.

Research design. A random selection of pediatric non-emergent ED users within a single CHIP managed care plan was surveyed regarding attitudes and health care preferences. Preferences for ED utilization were analyzed by racial-ethnic category, controlling for illness type, child and guardian age, education level, language, and perceived need.

Results. A total of 250 families were surveyed. Most respondents reported having a regular doctor, satisfaction with their physician, and ready access to their physician. Fifteen percent of White, 39% of Hispanic, and 38% of Black families reported that they preferred the emergency department for ill care. In multivariate analysis, White families were significantly less likely to prefer the emergency department for ill visits (odds ratio, 0.12; 95% confidence interval, 0.03-0.55) compared with Black and Hispanic families.

Conclusions. Racial-ethnic disparities in non-emergent ED utilization may be partially explained by different preferences for care.

Key words: children, emergency department, preferences for care, disparities
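The odds ratio reported above can be illustrated with a small sketch. The counts below are hypothetical (chosen only to mirror the reported 15% vs. ~38% preference rates), not the study's data; the confidence interval uses the standard log-odds-ratio normal approximation.

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
        a = group 1, outcome yes;  b = group 1, outcome no
        c = group 2, outcome yes;  d = group 2, outcome no
    Returns (OR, 95% CI) via the log-OR normal approximation."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts: 15 of 100 White families vs. 38 of 100
# minority families prefer the ED for ill care.
or_, ci = odds_ratio(15, 85, 38, 62)
```

An odds ratio below 1 (as here) means the first group has lower odds of the outcome than the second, matching the direction of the study's reported 0.12.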

Relevance: 20.00%

Abstract:

Emergency departments (EDs) have been called the net below the safety net because of their long history of providing care to the uninsured and others lacking access to the healthcare system. In recent years, those with Medicaid and, more recently, those with Medicare have also been using the ED as a medical home for routine primary care. There are many reasons for this, but the costs to the community have become increasingly burdensome.

To evaluate how often the ED is being used for primary care, we applied a standardized tool, the New York University Algorithm, to over 43,000 ED visits made by Hardin, Jefferson, and Orange County residents over a 12-month period that did not result in hospitalization. We compared our results to Harris County, where studies using the same framework have been performed, and found that sizeable segments of the population in both areas use the ED for non-emergent primary care that could be delivered in a more cost-effective community setting.

We also analyzed our dataset for visit-specific characteristics. We found evidence of two possible health care disparities: (1) Blacks had a higher rate of primary care-related ED visits relative to their share of the population than other racial/ethnic groups; and (2) when form of payment is considered, the uninsured were more likely to have a primary care-related ED visit than any other group. These findings suggest a lack of community-based primary care services for the medically needy in Southeast Texas.

We believe that studies such as this are warranted elsewhere in Texas as well. We plan to present our findings to local policy makers, who should find this information helpful in identifying gaps in the safety net and in better allocating scarce community resources.
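The New York University Algorithm assigns each ED discharge diagnosis fractional weights across visit categories rather than a single label. The sketch below shows that aggregation style with hypothetical codes, categories, and weights; the published algorithm uses its own diagnosis-to-weight tables and a finer category breakdown.

```python
# Hypothetical diagnosis-code weights, illustrating the fractional-weight
# aggregation style of the NYU Algorithm (not the published tables).
WEIGHTS = {
    "J06.9": {"non_emergent": 0.90, "emergent": 0.10},  # upper respiratory infection
    "R07.9": {"non_emergent": 0.25, "emergent": 0.75},  # chest pain, unspecified
    "S93.4": {"non_emergent": 0.60, "emergent": 0.40},  # ankle sprain
}

def classify(visits):
    """Sum fractional category weights over a list of visit diagnosis codes."""
    totals = {"non_emergent": 0.0, "emergent": 0.0}
    for code in visits:
        for category, weight in WEIGHTS[code].items():
            totals[category] += weight
    return totals

totals = classify(["J06.9", "R07.9", "J06.9", "S93.4"])
```

Summing fractional weights over all visits yields population-level estimates of primary-care-treatable ED use, which is the kind of figure the study reports.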

Relevance: 20.00%

Abstract:

Purpose. To evaluate the use of the Legionella Urine Antigen Test as a cost-effective method for diagnosing Legionnaires' disease in five San Antonio hospitals from January 2007 to December 2009.

Methods. The data reported by five San Antonio hospitals to the San Antonio Metropolitan Health District during a 3-year retrospective study (January 2007 to December 2009) were evaluated for the frequency of non-specific pneumonia infections, the number of Legionella Urine Antigen Tests performed, and the percentage of positive cases of Legionnaires' disease diagnosed by the test.

Results. A total of 7,087 cases of non-specific pneumonia were reported across the five San Antonio hospitals studied from 2007 to 2009. A total of 5,371 Legionella Urine Antigen Tests were performed over the same period, and 38 positive cases of Legionnaires' disease were identified by the test.

Conclusions. Despite the limitations of this study in obtaining sufficient relevant data to evaluate the cost effectiveness of the Legionella Urine Antigen Test in diagnosing Legionnaires' disease, the test is simple, accurate, and fast, with results available within minutes to hours, and convenient, because it can be performed in the emergency department on any patient who presents with clinical signs or symptoms of pneumonia. In the long run, it remains to be shown whether this test can decrease mortality, lower total medical costs by reducing the number of broad-spectrum antibiotics prescribed, shorten patient wait times and hospital stays, decrease the need for unnecessary ancillary testing, and improve overall patient outcomes.
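The reported counts imply two simple rates that put the results in perspective; the quick arithmetic below uses only the figures stated in the abstract.

```python
# Figures reported in the abstract (2007-2009, five hospitals).
pneumonia_cases = 7087   # non-specific pneumonia cases reported
tests_performed = 5371   # Legionella Urine Antigen Tests performed
positive_cases = 38      # confirmed Legionnaires' disease cases

testing_rate = tests_performed / pneumonia_cases  # share of pneumonia cases tested
positivity = positive_cases / tests_performed     # share of tests that were positive
```

Roughly three-quarters of pneumonia cases were tested, and well under 1% of tests were positive, which is why cost effectiveness is hard to establish from these data alone.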

Relevance: 20.00%

Abstract:

Measures have been developed to understand tendencies in the distribution of economic activity. The merits of these measures lie in the convenience of data collection and processing. In this interim report, we investigate how well such measures determine the geographical spread of economic activities, summarize their merits and limitations, and make clear that they must be applied with caution. As a first trial with areal data, this project focuses on administrative areas rather than on point data or input-output data. Firm-level data are not within the scope of this article. The rest of this article is organized as follows. In Section 2, we touch on the limitations and problems associated with the measures and areal data. Specific measures are introduced in Section 3 and applied in Section 4. The conclusion summarizes the findings and discusses future work.
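One widely used concentration measure of the kind discussed here is the Herfindahl index over regional shares; the sketch below is illustrative and not necessarily one of the specific measures adopted in the report.

```python
def herfindahl(values):
    """Herfindahl index of regional concentration: the sum of squared shares.
    Ranges from 1/n (activity spread evenly over n regions) up to 1
    (all activity in a single region)."""
    total = sum(values)
    return sum((v / total) ** 2 for v in values)

even = herfindahl([25, 25, 25, 25])       # evenly spread over 4 regions
concentrated = herfindahl([97, 1, 1, 1])  # almost all in one region
```

Such indices need only aggregate activity totals per administrative area, which is exactly the data-collection convenience the report highlights, but they are sensitive to how the areas are drawn, one of the cautions the report raises.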

Relevance: 20.00%

Abstract:

The preference utilization ratio, i.e., the share of imports under preferential tariff schemes out of total imports, has been a popular indicator for measuring the usage of preferential tariffs vis-à-vis tariffs on a most-favored-nation basis. A crucial shortcoming of this measure is its data requirements, particularly import value data classified by tariff scheme, which are not available in most countries. This study proposes an alternative measure of preferential tariff utilization, termed the "tariff exemption ratio." This measure offers the unique advantage of requiring only publicly available data, such as the World Development Indicators, for its computation. We can thus calculate it for most countries and compare them internationally. Our finding is that tariff exemption ratios differ widely across countries, with a global average of approximately 50%.
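The preference utilization ratio is defined directly in the abstract, so it can be written down in one line; the import values below are hypothetical.

```python
def preference_utilization_ratio(preferential_imports, total_imports):
    """Share of import value entering under preferential tariff schemes,
    as defined above. Requires imports broken down by tariff scheme,
    which is the data constraint the study points out."""
    return preferential_imports / total_imports

# Hypothetical import values (millions of dollars).
pur = preference_utilization_ratio(preferential_imports=320.0, total_imports=800.0)
```

The proposed tariff exemption ratio replaces the scheme-level breakdown with publicly available aggregates; its exact formula is not given in the abstract, so it is not sketched here.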

Relevance: 20.00%

Abstract:

The use of cloud computing is extending to all kinds of systems, including those that are part of critical infrastructures, and measuring reliability is becoming more difficult. Computing is becoming the fifth utility, in part thanks to the use of cloud services. Cloud computing is now used by all types of systems and organizations, including critical infrastructure, creating hidden inter-dependencies on both public and private cloud models. This paper investigates the use of cloud computing by critical infrastructure systems and the reliability and continuity-of-service risks associated with that use. Examples are presented of its use by different critical industries; even though the use of cloud computing by such systems is not yet widespread, this paper highlights a future risk. The concepts of macro and micro dependability and the model we introduce are useful for defining inter-dependencies and for analyzing the resilience of systems that depend on other systems, specifically in the cloud model.

Relevance: 20.00%

Abstract:

We present a method for the static resource usage analysis of MiniZinc models. The analysis can infer upper bounds on the usage that a MiniZinc model will make of resources such as the number of constraints of a given type (equality, disequality, global constraints, etc.), the number of variables (search variables or temporary variables), or the size of the expressions before calling the solver. These bounds are obtained from the models independently of the concrete input data (the instance data) and are in general functions of the sizes of such data. In our approach, MiniZinc models are translated into Ciao programs, which are then analysed by the CiaoPP system. CiaoPP includes a parametric analysis framework for resource usage in which the user can define resources and express the resource usage of library procedures (and certain program constructs) by means of a language of assertions. We present the approach and report on a preliminary implementation, which shows the feasibility of the approach and provides encouraging results.
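A toy example of the kind of bound such an analysis infers: a model that posts a disequality between every pair of n search variables generates n(n-1)/2 constraints, so the inferred bound is a closed-form function of the instance size n. This illustrative sketch is not the paper's implementation.

```python
def constraints_generated(n):
    """Actually enumerate the pairwise disequality constraints a model
    over n search variables would post (counting, not solving)."""
    return sum(1 for i in range(n) for j in range(i + 1, n))

def inferred_upper_bound(n):
    """Closed-form bound on the number of disequality constraints, as a
    function of the instance size n, which is what the static analysis
    aims to infer without seeing the instance data."""
    return n * (n - 1) // 2

# The closed-form bound matches the enumerated count for all small n.
assert all(constraints_generated(n) == inferred_upper_bound(n) for n in range(12))
```

The point is that the bound depends only on the size of the instance data, never on the data itself, mirroring the independence property stated above.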

Relevance: 20.00%

Abstract:

In an increasing number of applications (e.g., in embedded, real-time, or mobile systems) it is important or even essential to ensure conformance with respect to a specification expressing resource usage, such as execution time, memory, energy, or user-defined resources. In previous work we presented a novel framework for data size-aware, static resource usage verification. Specifications can include both lower- and upper-bound resource usage functions. In order to statically check such specifications, both upper- and lower-bound resource usage functions (on input data sizes) approximating the actual resource usage of the program are automatically inferred and compared against the specification. The outcome of the static checking of assertions can express intervals of input data sizes such that a given specification is proved for some intervals but disproved for others. After an overview of the approach, this paper provides a number of novel contributions: we present a full formalization, and we report on and provide results from an implementation within the Ciao/CiaoPP framework (which provides a general, unified platform for static and run-time verification, as well as unit testing). We also generalize the checking of assertions to allow preconditions expressing intervals within which the input data size of a program is supposed to lie (i.e., intervals for which each assertion is applicable), and we extend the class of resource usage functions that can be checked.
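The interval-based outcome described above can be sketched as follows: the specification is proved for a data size when the inferred upper bound stays within the budget, disproved when the inferred lower bound already exceeds it, and unknown in between. The bound functions below are hypothetical, and real checkers work symbolically over intervals rather than by pointwise enumeration.

```python
def check(inferred_lb, inferred_ub, spec_ub, sizes):
    """Classify each input data size against a specified usage budget."""
    proved    = [n for n in sizes if inferred_ub(n) <= spec_ub(n)]  # surely within budget
    disproved = [n for n in sizes if inferred_lb(n) >  spec_ub(n)]  # surely exceeds budget
    unknown   = [n for n in sizes if n not in proved and n not in disproved]
    return proved, disproved, unknown

# Hypothetical bounds: actual usage between n^2/2 and n^2; budget 10*n.
proved, disproved, unknown = check(
    inferred_lb=lambda n: n * n // 2,
    inferred_ub=lambda n: n * n,
    spec_ub=lambda n: 10 * n,
    sizes=range(1, 26),
)
```

Here the specification is proved on one interval of sizes, disproved on another, and undecided in between, which is exactly the three-valued, interval-shaped verdict the framework produces.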

Relevance: 20.00%

Abstract:

Automatic cost analysis of programs has traditionally concentrated on a reduced number of resources such as execution steps, time, or memory. However, the increasing relevance of analysis applications such as static debugging and/or certification of user-level properties (including for mobile code) makes it interesting to develop analyses for resource notions that are actually application-dependent. This may include, for example, bytes sent or received by an application, number of files left open, number of SMSs sent or received, number of accesses to a database, money spent, energy consumption, etc. We present a fully automated analysis for inferring upper bounds on the usage that a Java bytecode program makes of a set of application programmer-definable resources. In our context, a resource is defined by programmer-provided annotations which state the basic consumption that certain program elements make of that resource. From these definitions our analysis derives functions which return an upper bound on the usage that the whole program (and individual blocks) make of that resource for any given set of input data sizes. The analysis proposed is independent of the particular resource. We also present some experimental results from a prototype implementation of the approach covering a significant set of interesting resources.

Relevance: 20.00%

Abstract:

Automatic cost analysis of programs has been traditionally studied in terms of a number of concrete, predefined resources such as execution steps, time, or memory. However, the increasing relevance of analysis applications such as static debugging and/or certification of user-level properties (including for mobile code) makes it interesting to develop analyses for resource notions that are actually application-dependent. This may include, for example, bytes sent or received by an application, number of files left open, number of SMSs sent or received, number of accesses to a database, money spent, energy consumption, etc. We present a fully automated analysis for inferring upper bounds on the usage that a Java bytecode program makes of a set of application programmer-definable resources. In our context, a resource is defined by programmer-provided annotations which state the basic consumption that certain program elements make of that resource. From these definitions our analysis derives functions which return an upper bound on the usage that the whole program (and individual blocks) make of that resource for any given set of input data sizes. The analysis proposed is independent of the particular resource. We also present some experimental results from a prototype implementation of the approach covering an ample set of interesting resources.
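The annotation idea above can be sketched in miniature: program elements declare their basic consumption of a user-defined resource, and an upper bound for a whole run is derived compositionally from those declarations. The resource ("bytes sent"), the decorator, and the procedures below are all hypothetical illustrations, and the paper's analysis works on Java bytecode, not Python.

```python
# Registry of declared basic consumption: procedure name -> units per call.
COST = {}

def consumes(resource_units):
    """Annotation declaring the basic resource consumption of a procedure."""
    def wrap(fn):
        COST[fn.__name__] = resource_units
        return fn
    return wrap

@consumes(resource_units=64)    # each header costs 64 bytes on the wire
def send_header(): pass

@consumes(resource_units=256)   # each payload chunk costs 256 bytes
def send_chunk(): pass

def upper_bound(n_chunks):
    """Derived bound, as a function of input size, for a run that sends
    one header followed by n_chunks chunks."""
    return COST["send_header"] + n_chunks * COST["send_chunk"]
```

Note that the derivation machinery never needs to know what the resource means; it only combines the declared per-element costs, which is the resource-independence property claimed above.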

Relevance: 20.00%

Abstract:

Applications that operate on meshes are very popular in High Performance Computing (HPC) environments. In the past, many techniques have been developed in order to optimize the memory accesses for these datasets. Different loop transformations and domain decompositions are commonly used for structured meshes. However, unstructured grids are more challenging. The memory accesses, based on the mesh connectivity, do not map well to the usual linear memory model. This work presents a method to improve the memory performance which is suitable for HPC codes that operate on meshes. We develop a method to adjust the sequence in which the data are used inside the algorithm, by means of traversing and sorting the mesh. This sorted mesh can be transferred sequentially to the lower memory levels and allows for minimum data transfer requirements. The method also reduces the lower memory requirements dramatically: up to 63% of the L1 cache misses are removed in a traditional cache system. We have obtained speedups of up to 2.58 on memory operations as measured in a general-purpose CPU. An improvement is also observed with sequential access memories, where we have observed reductions of up to 99% in the required low-level memory size.
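One simple way to "traverse and sort" an unstructured mesh for locality is a breadth-first ordering of its connectivity graph, so that nodes used together end up close together in memory. This sketch is in the spirit of the method above; the paper's actual traversal and sorting scheme may differ.

```python
from collections import deque

def bfs_order(adjacency, start=0):
    """Reorder mesh nodes by breadth-first traversal of the connectivity
    graph, a simple locality-improving sort for unstructured meshes."""
    order, seen, queue = [], {start}, deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in adjacency[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return order

# A tiny unstructured mesh given as an adjacency list (hypothetical).
mesh = {0: [3], 1: [4], 2: [0, 3], 3: [0, 2, 4], 4: [1, 3]}
order = bfs_order(mesh)
```

Storing node data in `order` instead of the original numbering tends to turn connectivity-driven accesses into near-sequential ones, which is what enables sequential transfer to the lower memory levels.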

Relevance: 20.00%

Abstract:

Cross-platform development frameworks for mobile applications promise important advantages in cost cutting and easy maintenance, making them an attractive option for organizations interested in designing mobile applications for several platforms. Given that platform conventions are especially important for the User eXperience (UX) of mobile applications, using a framework in which the same code defines the behavior of the app on different platforms could have a negative impact on the UX. The objective of this study is to compare the cross-platform and native approaches in order to determine whether the chosen development approach has any impact on users in terms of UX. To establish a baseline, a study of cross-platform frameworks was performed to select the most appropriate one from a UX point of view. In order to achieve the objectives of this work, two development teams developed two versions of the same application: one using a framework that generates Android and iOS versions automatically, and another developing native versions of the same application. The alternative versions for each platform were evaluated with 37 users through a combination of a laboratory usability test and a longitudinal study. The results show that the differences are minimal in the Android version; in iOS, even though a reasonably good UX can be obtained with this framework by a UX-conscious design team, a higher level of UX can be achieved by developing directly in native code.