992 results for "floating point unit"


Abstract:

We have discovered using Pan-STARRS1 an extremely red late-L dwarf, which has (J−K)_MKO = 2.78 and (J−K)_2MASS = 2.84, making it the reddest known field dwarf and second only to 2MASS J1207-39b among substellar companions. Near-IR spectroscopy shows a spectral type of L7 ± 1 and reveals a triangular H-band continuum and weak alkali (K I and Na I) lines, hallmarks of low surface gravity. Near-IR astrometry from the Hawaii Infrared Parallax Program gives a distance of 24.6 ± 1.4 pc and indicates a much fainter J-band absolute magnitude than field L dwarfs. The position and kinematics of PSO J318.5-22 point to membership in the beta Pic moving group. Evolutionary models give a temperature of 1160 +30/−40 K and a mass of 6.5 +1.3/−1.0 M_Jup, making PSO J318.5-22 one of the lowest-mass free-floating objects in the solar neighborhood. This object adds to the growing list of low-gravity field L dwarfs and is the first to be strongly deficient in methane relative to its estimated temperature. Comparing the spectra of these young objects suggests that young L dwarfs with similar ages and temperatures can have different spectral signatures of youth. For the two objects with well-constrained ages (PSO J318.5-22 and 2MASS J0355+11), we find their temperatures are ≈400 K cooler than field objects of similar spectral type but their luminosities are similar, i.e., these young L dwarfs are very red and unusually cool but not "underluminous." Altogether, PSO J318.5-22 is the first free-floating object with the colors, magnitudes, spectrum, luminosity, and mass that overlap the young dusty planets around HR 8799 and 2MASS J1207-39.
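
The absolute-magnitude comparison follows from the parallax distance via the standard distance modulus. A minimal sketch of that calculation (the apparent J magnitude below is an assumed placeholder, not a value quoted in the abstract):

```python
import math

def absolute_magnitude(apparent_mag, distance_pc):
    """Distance modulus: M = m - 5 * log10(d / 10 pc)."""
    return apparent_mag - 5 * math.log10(distance_pc / 10.0)

# The 24.6 pc distance is the abstract's parallax result; the apparent
# J-band magnitude is an assumed placeholder, not a quoted value.
m_J = 16.7
M_J = absolute_magnitude(m_J, 24.6)
print(f"M_J = {M_J:.2f}")  # larger (fainter) than typical field L7 dwarfs
```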

Abstract:

Rotator cuff lesions are common and their incidence increases with age. After tendon rupture of the rotator cuff, the muscle-tendon unit retracts, which is accompanied by fatty infiltration, atrophy, and interstitial fibrosis of the musculature, fundamentally changing the muscle architecture. These changes are important prognostic factors for the outcome of operative rotator cuff reconstruction. Selecting the correct time point for reconstruction as well as the optimal mechanical fixation technique is decisive for successful reattachment at the tendon-to-bone insertion site; knowledge of the underlying pathophysiological processes therefore plays an important role. The goal of this article is to relate the currently existing evidence on preoperative changes of the muscle-tendon unit to the choice of the timing of the operation and the operative technique.

Abstract:

Cytoplasmic dynein performs multiple cellular tasks but its regulation remains unclear. The dynein heavy chain has an N-terminal stem that binds to other subunits and a C-terminal motor unit that contains six AAA (ATPase associated with cellular activities) domains and a microtubule-binding site located between AAA4 and AAA5. In Aspergillus nidulans, NUDF (a LIS1 homolog) functions in the dynein pathway, and two nudF6 partial suppressors were mapped to the nudA dynein heavy chain locus. Here we identified these two mutations. The nudAL1098F mutation resides in the stem region, and nudAR3086C is at the end of AAA4. These mutations partially suppress the phenotype of the nudF deletion but do not suppress the phenotype exhibited by mutants of the dynein intermediate chain and Arp1. Surprisingly, the stronger ΔnudF suppressor, nudAR3086C, causes an obvious decrease in the basal level of dynein's ATPase activity and an increase in dynein's distribution along microtubules. Thus, suppression of the ΔnudF phenotype may result from mechanisms other than simply the enhancement of dynein's ATPase activity. The fact that a mutation at the end of AAA4 negatively regulates dynein's ATPase activity but partially compensates for NUDF loss indicates the importance of the AAA4 domain in dynein regulation in vivo.

Abstract:

There is no accepted way of measuring prothrombin time without time loss for patients undergoing major surgery who are at risk of intraoperative dilution and consumption coagulopathy due to bleeding and volume replacement with crystalloids or colloids. In these situations, decisions to transfuse fresh frozen plasma and procoagulatory drugs have to rely on clinical judgment. Point-of-care (PoC) devices are considerably faster than the standard laboratory methods. In this study we assessed the accuracy of a PoC device measuring prothrombin time compared with the standard laboratory method. Patients undergoing major surgery and intensive care unit patients were included. PoC prothrombin time was measured by CoaguChek XS Plus (Roche Diagnostics, Switzerland). PoC and reference tests were performed independently and interpreted under blinded conditions. Using a cut-off prothrombin time of 50%, we calculated diagnostic accuracy measures, plotted a receiver operating characteristic (ROC) curve, and tested for equivalence between the two methods. PoC sensitivity and specificity were 95% (95% CI 77%, 100%) and 95% (95% CI 91%, 98%), respectively. The negative likelihood ratio was 0.05 (95% CI 0.01, 0.32); the positive likelihood ratio was 19.57 (95% CI 10.62, 36.06). The area under the ROC curve was 0.988. Equivalence between the two methods was confirmed. CoaguChek XS Plus is rapid and highly accurate compared with the reference test. These findings suggest that PoC testing will be useful for monitoring intraoperative prothrombin time when coagulopathy is suspected, and it could lead to a more rational use of expensive and limited blood bank resources.
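
The accuracy measures reported above all derive from a 2×2 table of PoC results against the laboratory reference at the 50% cut-off. A minimal sketch of those calculations, using invented cell counts rather than the study's data:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity and likelihood ratios from a 2x2 table."""
    sensitivity = tp / (tp + fn)              # true-positive rate
    specificity = tn / (tn + fp)              # true-negative rate
    lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity  # negative likelihood ratio
    return sensitivity, specificity, lr_pos, lr_neg

# Hypothetical counts: "positive" = prothrombin time below the 50% cut-off.
sens, spec, lr_pos, lr_neg = diagnostic_accuracy(tp=19, fp=12, fn=1, tn=230)
print(f"sens={sens:.2f} spec={spec:.2f} LR+={lr_pos:.1f} LR-={lr_neg:.2f}")
```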

Abstract:

Intermittent and continuous renal replacement therapies (RRTs) are available for the treatment of acute renal failure (ARF) in the intensive care unit (ICU). Although at present there are no adequately powered survival studies, available data suggest that both methods are equal with respect to patient outcome. Cost comparison between techniques is therefore important for selecting the modality. Expenditures were prospectively assessed as a secondary end point during a controlled, randomized trial comparing intermittent hemodialysis (IHD) with continuous venovenous hemodiafiltration (CVVHDF). The outcomes of the primary end points of this trial, ICU and in-hospital mortality, have been previously published. One hundred twenty-five patients from a Swiss university hospital ICU were randomized to either CVVHDF or IHD; of these, 42 (CVVHDF) and 34 (IHD) were available for cost analysis. Patient characteristics, delivered dialysis dose, duration of stay in the ICU or hospital, mortality rates, and recovery of renal function did not differ between the two groups. Detailed 24-h time and material consumption protocols were available for 369 (CVVHDF) and 195 (IHD) treatment days. The mean daily duration of CVVHDF was 19.5 +/- 3.2 h/day, resulting in total expenditures of Euro 436 +/- 21 per day (21% for human resources and 79% for technical devices). For IHD (mean 3.0 +/- 0.4 h/treatment), the costs were lower (Euro 268 +/- 26), with a larger proportion for human resources (45%). Nursing time spent was 113 +/- 50 min per day for CVVHDF and 198 +/- 63 min per IHD treatment. Total costs for RRT in ICU patients with ARF were thus lower with IHD than with CVVHDF, a difference that has to be taken into account when selecting the method of RRT for ARF in the ICU.
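
The per-day expenditures above combine nursing time with material and device costs. A toy sketch of that cost accounting (only the nursing minutes come from the abstract; the hourly rate and material costs are hypothetical placeholders):

```python
def daily_rrt_cost(nurse_minutes, nurse_rate_per_hour, material_cost):
    """Return (total daily cost, share spent on human resources)."""
    staff_cost = nurse_minutes / 60.0 * nurse_rate_per_hour
    total = staff_cost + material_cost
    return total, staff_cost / total

# Nursing minutes per treatment day are from the abstract; the hourly
# rate (EUR/h) and material costs (EUR/day) are assumed placeholders.
for name, minutes, materials in [("CVVHDF", 113, 345), ("IHD", 198, 150)]:
    total, hr_share = daily_rrt_cost(minutes, 48.0, materials)
    print(f"{name}: EUR {total:.0f}/day, {hr_share:.0%} human resources")
```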

Abstract:

BACKGROUND: Monitoring of HIV viral load in patients on combination antiretroviral therapy (ART) is not generally available in resource-limited settings. We examined the cost-effectiveness of qualitative point-of-care viral load tests (POC-VL) in sub-Saharan Africa.
DESIGN: Mathematical model based on longitudinal data from the Gugulethu and Khayelitsha township ART programmes in Cape Town, South Africa.
METHODS: Cohorts of patients on ART monitored by POC-VL, CD4 cell count or clinically were simulated. Scenario A considered the more accurate detection of treatment failure with POC-VL only, and Scenario B also considered the effect on HIV transmission. Scenario C further assumed that the risk of virologic failure is halved with POC-VL due to improved adherence. We estimated the change in costs per quality-adjusted life-year gained (incremental cost-effectiveness ratios, ICERs) of POC-VL compared with CD4 and clinical monitoring.
RESULTS: POC-VL tests with detection limits less than 1000 copies/ml increased costs due to unnecessary switches to second-line ART, without improving survival. Assuming POC-VL unit costs between US$5 and US$20 and detection limits between 1000 and 10,000 copies/ml, the ICER of POC-VL was US$4010-US$9230 compared with clinical monitoring and US$5960-US$25,540 compared with CD4 cell count monitoring. In Scenario B, the corresponding ICERs were US$2450-US$5830 and US$2230-US$10,380. In Scenario C, the ICER ranged between US$960 and US$2500 compared with clinical monitoring and between cost-saving and US$2460 compared with CD4 monitoring.
CONCLUSION: The cost-effectiveness of POC-VL for monitoring ART is improved by a higher detection limit, by taking the reduction in new HIV infections into account, and by assuming that failure of first-line ART is reduced due to targeted adherence counselling.
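
An ICER is the difference in cost between two monitoring strategies divided by the difference in quality-adjusted life-years they produce. A minimal sketch with invented per-patient figures, not the model's outputs:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical per-patient averages: POC-VL monitoring vs. clinical monitoring.
print(icer(cost_new=3200.0, cost_old=2900.0,
           qaly_new=9.25, qaly_old=9.13))  # -> 2500.0 (US$ per QALY gained)
```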

Abstract:

A characterization is provided for the von Mises–Fisher random variable in terms of the first exit point from the unit hypersphere of the drifted Wiener process. Laplace transform formulae for the first exit time from the unit hypersphere of the drifted Wiener process are provided. Post representations in terms of Bell polynomials are provided for the densities of the first exit times from the circle and from the sphere.
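
The characterization can be illustrated numerically: simulate a Wiener process with drift started at the origin and record where it first leaves the unit sphere; the exit points should then follow a von Mises–Fisher law with mean direction along the drift. A rough Monte Carlo sketch (step size, drift, and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def first_exit_point(drift, dt=1e-3):
    """Euler simulation of a drifted Wiener process from the origin,
    stopped at the first exit from the unit sphere."""
    x = np.zeros_like(drift)
    sqrt_dt = np.sqrt(dt)
    while np.linalg.norm(x) < 1.0:
        x = x + drift * dt + sqrt_dt * rng.standard_normal(x.shape)
    return x / np.linalg.norm(x)  # project the overshoot back onto the sphere

mu = np.array([2.0, 0.0, 0.0])  # arbitrary drift vector
samples = np.array([first_exit_point(mu) for _ in range(1000)])
print(samples.mean(axis=0))  # mean resultant vector points along the drift
```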

Abstract:

A patient classification system was developed integrating a patient acuity instrument with a computerized nursing distribution method based on a linear programming model. The system was designed for real-time measurement of patient acuity (workload) and allocation of nursing personnel to optimize the utilization of resources.

The acuity instrument was a prototype tool with eight categories of patients defined by patient severity and nursing intensity parameters. From this tool, the demand for nursing care was defined in patient points, with one point equal to one hour of RN time. Validity and reliability of the instrument were determined as follows: (1) content validity by a panel of expert nurses; (2) predictive validity through a paired t-test analysis of preshift and postshift categorization of patients; (3) initial reliability by a one-month pilot of the instrument in a practice setting; and (4) interrater reliability by the Kappa statistic.

The nursing distribution system was a linear programming model using a branch and bound technique for obtaining integer solutions. The objective function was to minimize the total number of nursing personnel used by optimally assigning the staff to meet the acuity needs of the units. A penalty weight was used as a coefficient of the objective function variables to define priorities for allocation of staff. The demand constraints were requirements to meet the total acuity points needed for each unit and to have a minimum number of RNs on each unit. The supply constraints were (1) the total availability of each type of staff and the value of that staff member, with value determined relative to that type of staff's ability to perform the job functions of an RN (i.e., eight hours of an RN = 8 points, of an LVN = 6 points); and (2) the number of personnel available for floating between units.

The capability of the model to assign staff quantitatively and qualitatively equal to the manual method was established by a thirty-day comparison. Sensitivity testing demonstrated appropriate adjustment of the optimal solution to changes in penalty coefficients in the objective function and to acuity totals in the demand constraints. Further investigation of the model documented correct adjustment of assignments in response to staff value changes, and cost minimization by the addition of a dollar coefficient to the objective function.
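
The dissertation's full model is not reproduced here, but the structure it describes (a penalty-weighted objective, acuity-point demand constraints, minimum-RN constraints, and staff-pool supply constraints, solved to integrality) can be sketched with an off-the-shelf solver. A toy instance with two units and hypothetical demands and pool sizes:

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: x = [rn_u1, rn_u2, lvn_u1, lvn_u2] (staff per unit).
# Penalty weights steer the solver toward RNs before LVNs (assumed values).
c = np.array([1.0, 1.0, 1.2, 1.2])

# Point values per 8-hour shift: RN = 8 points, LVN = 6 points (as in the
# abstract). Demands of 40 and 30 points and the pool sizes are hypothetical.
A_ub = np.array([
    [-8,  0, -6,  0],   # unit 1 acuity: 8*rn_u1 + 6*lvn_u1 >= 40
    [ 0, -8,  0, -6],   # unit 2 acuity: 8*rn_u2 + 6*lvn_u2 >= 30
    [-1,  0,  0,  0],   # minimum RN coverage on unit 1: rn_u1 >= 2
    [ 0, -1,  0,  0],   # minimum RN coverage on unit 2: rn_u2 >= 2
    [ 1,  1,  0,  0],   # RN pool:  rn_u1 + rn_u2 <= 6
    [ 0,  0,  1,  1],   # LVN pool: lvn_u1 + lvn_u2 <= 4
])
b_ub = np.array([-40, -30, -2, -2, 6, 4])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None),
              integrality=np.ones(4), method="highs")
print(res.x, res.fun)  # integer staffing plan and penalized head count
```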

Abstract:

The European standard for gillnet sampling to characterize lake fish communities stratifies sampling effort (i.e., number of nets) within depth strata. Nets to sample benthic habitats are randomly distributed throughout the lake within each depth stratum. Pelagic nets are also stratified by depth, but are set only at the deepest point of the lake. Multiple authors have suggested that this design under-represents pelagic habitats, resulting in estimates of whole-lake CPUE and community composition which are disproportionately influenced by the ecological conditions of littoral and benthic habitats. To address this issue, researchers have proposed estimating whole-lake CPUE by weighting the catch rate in each depth compartment by the proportion of the lake's volume contributed by that compartment. Our study aimed to assess the effectiveness of volume-weighting by applying it to fish communities sampled according to the European standard (CEN) and by a second whole-lake gillnetting protocol (VERT), which prescribes additional fishing effort in pelagic habitats. We assume that convergence between the protocols indicates that volume-weighting provides a more accurate estimate of whole-lake catch rate and community composition. Our results indicate that volume-weighting improves agreement between the protocols for whole-lake total CPUE, the estimated proportions of perch and roach, and the overall fish community composition. Discrepancies between the protocols remaining after volume-weighting may be because sampling under the CEN protocol overlooks horizontal variation in pelagic fish communities. Analyses based on multiple pelagic-set VERT nets identified gradients in the density and biomass of pelagic fish communities in almost half the lakes, corresponding to the depth of water at the net-setting location and the distance along the length of a lake. Additional CEN pelagic sampling effort allocated across water depths and distributed throughout the lake would therefore help to reconcile differences between the sampling protocols and, in combination with volume-weighting, converge on a more accurate estimate of whole-lake fish communities.
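
The volume-weighting step described above reduces to a weighted mean of stratum catch rates, with weights equal to each stratum's share of total lake volume. A minimal sketch with invented strata:

```python
def volume_weighted_cpue(cpue_by_stratum, volume_by_stratum):
    """Weight each depth stratum's catch rate by its share of lake volume."""
    total_volume = sum(volume_by_stratum.values())
    return sum(cpue * volume_by_stratum[s] / total_volume
               for s, cpue in cpue_by_stratum.items())

# Hypothetical strata: CPUE in fish per net-night, volumes in 10^6 m^3.
cpue = {"0-3 m": 45.0, "3-6 m": 20.0, "6-12 m": 8.0, "pelagic": 3.0}
vol = {"0-3 m": 1.2, "3-6 m": 2.0, "6-12 m": 3.5, "pelagic": 8.3}
print(volume_weighted_cpue(cpue, vol))  # ~9.8, far below the littoral CPUE
```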

Abstract:

Ultrastructural analysis of the polydnavirus of the braconid wasp Chelonus inanitus revealed that virions consist of one cylindrical nucleocapsid enveloped by a single unit membrane. Nucleocapsids have a constant diameter of 33.7 +/- 1.4 nm and a variable length of between 8 and 46 nm. Spreading of viral DNA showed that the genome consists of circular dsDNA molecules of variable sizes and measurement of the contour lengths indicated sizes of between 7 and 31 kbp. When virions were exposed to osmotic shock conditions to release the DNA, only one circular molecule was released per particle suggesting that the various DNA molecules are singly encapsidated in this bracovirus. The viral genome was seen to consist of at least 10 different segments and the aggregate genome size is in the order of 200 kbp. By partial digestion of viral DNA with HindIII or EcoRI in the presence of ethidium bromide and subsequent ligation with HindIII-cut pSP65 or EcoRI-cut pSP64 and transfection into Escherichia coli, libraries of 103 HindIII and 23 EcoRI clones were obtained. Southern blots revealed that complete and unrearranged segments were cloned with this approach, and restriction maps for five segments were obtained. Part of a 16.8 kbp segment was sequenced, found to be AT-rich (73%) and to contain six copies of a 17 bp repeated sequence. The development of the female reproductive tract in the course of pupal-adult development of the wasp was investigated and seen to be strictly correlated with the pigmentation pattern. By the use of a semiquantitative PCR, replication of viral DNA was observed to initiate at a specific stage of pupal-adult development.

Abstract:

The nail unit is the largest and a rather complex skin appendage. It is located on the dorsal aspect of the tips of fingers and toes and has important protective and sensory functions. Development begins in utero between weeks 7 and 8, and the nail unit is fully formed at birth; a great number of signals are necessary for its correct development. Anatomically, it consists of 4 epithelial components: the matrix, which forms the nail plate; the nail bed, which firmly attaches the plate to the distal phalanx; the hyponychium, which forms a natural barrier at the physiological point of separation of the nail from the bed; and the eponychium, the undersurface of the proximal nail fold, which is responsible for the formation of the cuticle. The connective tissue components of the matrix and nail bed dermis are located between the corresponding epithelia and the bone of the distal phalanx, and they possess a morphogenetic potency for the regeneration of their epithelia. The lateral and proximal nail folds form a distally open frame for the growing nail, and the tip of the digit has rich sensory innervation. The blood supply is provided by the paired volar and dorsal digital arteries; veins and lymphatic vessels are less well defined. The microscopic anatomy varies from nail subregion to subregion. Several different biopsy techniques are available for the histopathological evaluation of nail alterations.

Abstract:

We apply the efficient unit-root tests of Elliott, Rothenberg, and Stock (1996) and Elliott (1998) to twenty-one real exchange rates, using monthly data of the G-7 countries from the post-Bretton Woods floating exchange rate period. Our results indicate that, for eighteen of the twenty-one real exchange rates, the null hypothesis of a unit root can be rejected at the 10% significance level or better using the Elliott et al. (1996) DF-GLS test. The unit-root null hypothesis is also rejected for one additional real exchange rate when we allow for one endogenously determined break in the time series of the real exchange rate, as in Perron (1997). In all, we find favorable evidence to support long-run purchasing power parity in nineteen of twenty-one real exchange rates. Second, we find no strong evidence to suggest that the use of non-U.S.-dollar-based real exchange rates tends to produce more favorable results for long-run PPP than the use of U.S.-dollar-based real exchange rates, as Lothian (1998) has concluded.
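
A DF-GLS test of the kind used in the paper is available, for example, in the Python arch package; the sketch below assumes a monthly log real exchange rate series is already in hand (the series here is a synthetic random walk placeholder, so the test should fail to reject):

```python
import numpy as np
from arch.unitroot import DFGLS

# Placeholder for a log real exchange rate q = s + p_foreign - p_home;
# a pure random walk, i.e., a series that genuinely has a unit root.
rng = np.random.default_rng(1)
q = np.cumsum(rng.standard_normal(300)) * 0.02

test = DFGLS(q, trend="c")  # Elliott-Rothenberg-Stock test, constant only
print(test.stat, test.pvalue)  # rejection would support long-run PPP
```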

Abstract:

The Data Envelopment Analysis (DEA) efficiency score obtained for an individual firm is a point estimate without any confidence interval around it. In recent years, researchers have resorted to bootstrapping in order to generate empirical distributions of efficiency scores. This procedure assumes that all firms have the same probability of getting an efficiency score from any specified interval within the [0,1] range. We propose a bootstrap procedure that empirically generates the conditional distribution of efficiency for each individual firm given systematic factors that influence its efficiency. Instead of resampling directly from the pooled DEA scores, we first regress these scores on a set of explanatory variables not included at the DEA stage and bootstrap the residuals from this regression. These pseudo-efficiency scores incorporate the systematic effects of unit-specific factors along with the contribution of the randomly drawn residual. Data from the U.S. airline industry are utilized in an empirical application.
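
The proposed procedure amounts to regressing the DEA scores on the second-stage covariates and bootstrapping the residuals around each firm's fitted value. A stripped-down sketch of that resampling idea, with fabricated data in place of the airline panel:

```python
import numpy as np

def conditional_bootstrap(scores, X, n_boot=1000, seed=0):
    """Regress DEA scores on covariates, then resample the residuals and
    add them back to each firm's fitted value, yielding a firm-specific
    (conditional) distribution of pseudo-efficiency scores."""
    rng = np.random.default_rng(seed)
    X1 = np.column_stack([np.ones(len(scores)), X])  # add an intercept
    beta, *_ = np.linalg.lstsq(X1, scores, rcond=None)
    fitted = X1 @ beta
    residuals = scores - fitted
    draws = rng.choice(residuals, size=(n_boot, len(scores)), replace=True)
    return np.clip(fitted + draws, None, 1.0)  # efficiency capped at 1

# Fabricated example: 50 firms, scores in (0, 1], two explanatory variables.
rng = np.random.default_rng(42)
X = rng.normal(size=(50, 2))
scores = np.clip(0.8 + 0.05 * X[:, 0] + rng.normal(0, 0.1, size=50), 0.05, 1.0)
pseudo = conditional_bootstrap(scores, X)
print(pseudo.shape)         # (1000, 50): one conditional distribution per firm
print(pseudo[:, 0].mean())  # e.g., the mean pseudo-score for firm 0
```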

Abstract:

Background: Poor communication among health care providers is cited as the most common cause of sentinel events involving patients. Sign-out of patient data at the change of clinician shifts is a component of communication that is especially vulnerable to errors. Sign-outs are particularly extensive and complex in intensive care units (ICUs). There is a paucity of validated tools to assess ICU sign-outs.
Objective: To design a valid and reliable survey tool to assess the perceptions of pediatric ICU (PICU) clinicians about sign-out.
Design: Cross-sectional, web-based survey.
Setting: Academic hospital, 31-bed PICU.
Subjects: Attending faculty, fellows, nurse practitioners, and physician assistants.
Interventions: A survey was designed with input from a focus group and administered to PICU clinicians. Test-retest reliability, internal consistency, and validity of the survey tool were assessed.
Measurements and Main Results: Forty-eight PICU clinicians agreed to participate. We had 42 (88%) and 40 (83%) responses in the test and retest phases. The mean scores for the ten survey items ranged from 2.79 to 3.67 on a five-point Likert scale, with no significant test-retest difference and a Pearson correlation between pre- and post-answers of 0.65. The survey item scores showed internal consistency, with a Cronbach's alpha of 0.85. Exploratory factor analysis revealed three constructs: efficacy of the sign-out process, recipient satisfaction, and content applicability. Seventy-eight percent of clinicians affirmed the need for improvement of the sign-out process and 83% confirmed the need for face-to-face verbal sign-out. A system-based sign-out format was favored by fellows and advanced-level practitioners, while attendings preferred a problem-based format (p = 0.003).
Conclusions: We developed a valid and reliable survey to assess clinician perceptions of the ICU sign-out process. These results can be used to design a verbal template to improve and standardize the sign-out process.
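
The internal-consistency figure reported above is Cronbach's alpha, computed from the item-level variances and the variance of the summed scale. A minimal sketch with fabricated Likert responses (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Fabricated 1-5 Likert responses: 6 respondents, 4 survey items.
responses = np.array([[4, 4, 5, 4],
                      [3, 3, 4, 3],
                      [2, 3, 2, 2],
                      [5, 4, 5, 5],
                      [3, 2, 3, 3],
                      [4, 4, 4, 5]])
print(round(cronbach_alpha(responses), 2))  # high alpha = consistent items
```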

Abstract:

The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU," lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables.

Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature.

In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data are represented by the standard one-value-per-variable paradigm and are widely employed in a host of clinical models and tools; they are often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to time series data elements. The first of these is the raw data elements, which are represented by multiple values per variable and constitute the measured observations that are typically available to end users when they review time series data; these are often represented as dots on a graph. The final class of data results from performing time series analysis, and it represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood that a representation of the time series data elements is produced that is able to distinguish between two or more classes of outcomes.
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU," provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time-series-based models are infeasible due to the relatively large number of data elements and the complexity of the preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit.

Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances.

The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit," presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the receiver operating characteristic curve increased from a baseline of 87% to 98% with the trend analysis included. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy compared with the baseline multivariate model, but diminished classification accuracy compared with adding just the trend analysis features (i.e., without the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that which was achieved by exclusion of the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
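
The dissertation's actual pipeline is not reproduced here, but the core idea of the trend-analysis features can be sketched: compute a least-squares slope over a trailing window of a vital-sign series and feed it to a classifier alongside the snapshot value. A toy illustration with simulated series (the drift, window, and sample sizes are all invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def trailing_slope(series, window):
    """Least-squares slope over the last `window` samples: a simple
    stand-in for a time-series 'trend analysis' latent feature."""
    t = np.arange(window)
    return np.polyfit(t, series[-window:], 1)[0]

# Simulated heart-rate series: class 1 drifts downward before the event.
rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):
    for _ in range(100):
        drift = -0.3 if label == 1 else 0.0
        hr = 120 + np.cumsum(rng.normal(drift, 1.0, size=30))
        X.append([hr[-1], trailing_slope(hr, 15)])  # snapshot + trend feature
        y.append(label)

model = LogisticRegression().fit(np.array(X), np.array(y))
print(model.score(np.array(X), np.array(y)))  # in-sample accuracy of the toy
```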