871 results for Power of attorney


Relevance: 90.00%

Abstract:

The purpose of this research was to determine whether principles from organizational theory could be used as a framework to compare and contrast safety interventions developed by for-profit industry for the period 1986–1996. A literature search of electronic databases and a manual search of journals and local university libraries' book stacks were conducted for safety interventions developed by for-profit businesses. To maintain a constant regulatory environment, the business sectors of nuclear power and aviation, as well as non-profits, were excluded. Safety intervention evaluations were screened for scientific merit. Leavitt's model from organizational theory was updated to include safety climate and renamed the Updated Leavitt's Model. In all, 8,000 safety citations were retrieved; 525 met the inclusion criteria, 255 met the organizational safety intervention criteria, and 50 met the scientific merit criteria. Most came from non-public-health journals. These 50 were categorized by the Updated Leavitt's Model according to where within the organizational structure the intervention took place. Evidence tables were constructed for descriptive comparison. The interventions clustered in the areas of social structure, safety climate, the interaction between social structure and participants, and the interaction between technology and participants. No interventions were found in the interactions between social structure and technology, goals and technology, or participants and goals. Despite the scientific merit criteria, many studies still had significant design weaknesses. Five interventions tested for statistical significance, but none commented on the statistical power of their study. Empirical studies based on safety climate theory had the most rigorous designs; these studies attempted to randomize subjects to avoid bias.
This work highlights the utility of the Updated Leavitt's Model, a model from organizational theory, as a framework for comparing safety interventions. It also highlights the need for better study design in future trials of safety interventions.

Abstract:

Usual food choices during the past year, self-reported changes in consumption of three important food groups, and weight change or stability were the questions addressed in this cross-sectional survey and retrospective review. The subjects were 141 patients with Hodgkin's disease or other B-cell types of lymphoma within their first three years following completion of initial treatments for lymphoma at the University of Texas M. D. Anderson Cancer Center in Houston, Texas. The previously validated Block-98 Food Frequency Questionnaire was used to estimate usual food choices during the past year. Supplementary questions asked about changes in breads and cereals (white or whole grain) and relative amounts of fruits and vegetables compared with before diagnosis and treatment. Over half of the subjects reported consuming more whole grains, fruits, and/or vegetables, and almost three-quarters of those not reporting such changes had been consuming whole grains before diagnosis and treatment. Various dietary patterns were defined in order to learn whether proportionately more patients who changed in healthy directions fulfilled recognized nutritional guidelines, such as the 5-A-Day recommendation for fruits and vegetables and the Dietary Reference Intakes (DRIs) for selected nutrients. Small sizes of the dietary pattern subgroups limited the power of this study to detect differences in meeting recommended dietary guidelines. Nevertheless, insufficient and excessive intakes were detected among individuals with respect to fruits and vegetables, fats, calcium, selenium, iron, folate, and vitamin A. The prevalence of inadequate or excess intakes of foods or nutrients, even among those who perceived that they had increased or continued to eat whole grains and/or fruits and vegetables, is of concern because of recognized effects upon general health and potential cancer-related effects.
Over half of the subjects were overweight or obese (by BMI category) on their first visit to this cancer center, and that proportion increased to almost three-quarters by their last follow-up visits. Men were significantly heavier than women, but no other significant differences in BMI measures were found, even after accounting for prescribed steroids and dietary patterns.

Abstract:

Background. EAP programs for airline pilots in companies with a well-developed recovery management program are known to reduce pilot absenteeism following treatment. Given the costs and safety consequences to society, it is important to identify pilots who may be experiencing an alcohol or drug (AOD) disorder and get them into treatment. Hypotheses. This study investigated the predictive power of workplace absenteeism in identifying AOD disorders. The first hypothesis was that higher absenteeism in a 12-month period is associated with a higher risk that an employee is experiencing an AOD disorder. The second hypothesis was that AOD treatment would reduce subsequent absence rates and the costs of replacing pilots on missed flights. Methods. A case-control design using eight years of monthly archival absence data (53,000 pay records) was conducted with a sample of 76 employees having an AOD diagnosis (cases) matched 1:4 with 304 non-diagnosed employees (controls) of the same profession and company (male commercial airline pilots). Cases and controls were matched on age, rank, and date of hire. Absence rate was defined as sick-time hours used over the sum of the minimum-guarantee pay hours, annualized using the months the pilot worked for the year. Conditional logistic regression was used to determine whether absence predicts employees experiencing an AOD disorder, starting 3 years prior to the cases receiving the AOD diagnosis. A repeated-measures ANOVA, t tests, and rate ratios (with 95% confidence intervals) were used to determine differences between cases and controls in absence usage for 3 years pre- and 5 years post-treatment. Mean replacement costs were calculated for sick leave usage 3 years pre- and 5 years post-treatment to estimate the cost of sick leave from the perspective of the company. Results.
Sick leave, as measured by absence rate, predicted the risk of being diagnosed with an AOD disorder (OR 1.10, 95% CI = 1.06, 1.15) during the 12 months prior to receiving the diagnosis. Mean absence rates for diagnosed employees increased over the three years before treatment, particularly in the year before treatment, whereas the controls' did not (three years, x̄ = 6.80 vs. 5.52; two years, x̄ = 7.81 vs. 6.30; one year, x̄ = 11.00 for cases vs. 5.51 for controls). In the first year post-treatment compared with the year prior to treatment, rate ratios indicated a significant 60% post-treatment reduction in absence rates (rate ratio = 0.40, CI = 0.28, 0.57). Absence rates for cases remained lower than controls for the first three years after completion of treatment. Upon discharge from the FAA's and company's three-year AOD monitoring program, cases' absence rates increased slightly during the fourth year (controls, x̄ = 0.09, SD = 0.14; cases, x̄ = 0.12, SD = 0.21). However, the following year, their mean absence rates were again below those of the controls (controls, x̄ = 0.08, SD = 0.12; cases, x̄ = 0.06, SD = 0.07). Costs associated with replacing pilots calling in sick were 60% lower in the first year after returning to work than in the year of diagnosis, and replacement costs continued to decline over the next two years for the treated employees. Conclusions. This research demonstrates the potential for workplace absences to serve as an active organizational surveillance mechanism to assist managers and supervisors in identifying employees who may be experiencing, or at risk of experiencing, an alcohol/drug disorder. Currently, many workplaces use only performance problems and ignore the employee's absence record.
A referral to an EAP or alcohol/drug evaluation based on the employee's absence/sick-leave record, as incorporated into company policy, can provide another useful indicator that may also carry less stigma, thus reducing barriers to seeking help. This research also confirms two conclusions heretofore based only on cross-sectional studies: (1) higher absence rates are associated with employees experiencing an AOD disorder; (2) treatment is associated with lower costs for replacing absent pilots. Due to the uniqueness of the employee population studied (commercial airline pilots) and the organizational documentation of absence, the generalizability of this study to other professions and occupations should be considered limited. Transition to Practice. The odds ratios for the relationship between absence rates and an AOD diagnosis are precise; the OR for the year of diagnosis indicates that the likelihood of being diagnosed increases 10% for every hour change in sick leave taken. In practice, however, a pilot uses approximately 20 hours of sick leave for one trip, because the replacement will have to be paid the guaranteed minimum of 20 hours. Thus, the rate based on hourly changes is precise but not practical. To provide the organization with practical recommendations, the yearly mean absence rates were used. A pilot flies on average 90 hours a month, or 1,080 annually. Cases used almost twice the mean rate of sick time in the year prior to diagnosis (T-1) compared with controls (cases, x̄ = .11; controls, x̄ = .06). Cases are therefore expected to use on average 119 hours annually (total annual hours × mean annual absence rate), while controls will use about 60 hours. The controls' 60 hours could translate to 3 trips of 20 hours each. Management could use a standard of 80 hours or more of sick time claimed in a year as the threshold for unacceptable absence, a 25% increase over the controls (a cost to the company of approximately $4,000).
At the 80-hour mark, the Chief Pilot would be able to call the pilot in for a routine check into the nature of the pilot's excessive absence. This management action would be based on a company standard rather than on a behavioral or performance issue. Using absence data in this fashion would make it an active surveillance mechanism.
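The threshold arithmetic above can be expressed compactly. The sketch below is purely illustrative, not the study's method: the 90 hours/month, the absence rates of .11 and .06, and the 80-hour threshold come from the abstract, while the function names are hypothetical.

```python
ANNUAL_FLIGHT_HOURS = 90 * 12  # a pilot flies on average 90 hours/month

def expected_sick_hours(absence_rate, annual_hours=ANNUAL_FLIGHT_HOURS):
    """Expected annual sick-leave hours = total annual hours * mean absence rate."""
    return annual_hours * absence_rate

def triggers_review(sick_hours, threshold=80):
    """Company-standard flag: 80+ sick-leave hours in a year would prompt
    a routine check by the Chief Pilot under the proposed policy."""
    return sick_hours >= threshold

cases_hours = expected_sick_hours(0.11)     # 118.8, reported as ~119 hours
controls_hours = expected_sick_hours(0.06)  # 64.8, reported as ~60 hours
```

Under these figures, cases cross the 80-hour threshold while controls do not, matching the abstract's recommendation.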

Abstract:

The purpose of this study was to assess the impact of the Arkansas Long-Term Care Demonstration Project upon Arkansas' Medicaid expenditures and upon the clients it serves. A retrospective Medicaid expenditure study component used analysis of variance techniques to test for the Project's effects upon aggregated expenditures for 28 demonstration and control counties, representing 25 percent of the State's population, over four years (1979-1982). A second approach to the study question utilized a 1982 prospective sample of 458 demonstration and control clients from the same 28 counties. The disability level, or need for care, of each patient was established a priori. The extent to which an individual's variation in Medicaid utilization and costs was explained by patient need, the presence or absence of the channeling project's placement decision, or some other patient characteristic was examined by multiple regression analysis. Long-term and acute-care Medicaid, Medicare, third-party, self-pay, and the grand total of all Medicaid claims were analyzed for project effects and explanatory relationships. The main project effect was to increase personal care costs without reducing nursing home or acute care costs (prospective study). Expansion of clients appeared to occur in personal care (prospective study) and minimum-care nursing homes (retrospective study) in the project areas. Cost-shifting between Medicaid and Medicare in the project areas and two different patterns of utilization in the North and South projects tended to offset each other, such that no differences in total costs occurred between the demonstration and control areas. The project was significant (β = .22, p < .001) only for personal care costs. The explanatory power of this personal care regression model (R² = .36) was comparable to other reported health services utilization models.
Other variables (Medicare buy-in, level of disability, Social Security Supplemental Income (SSI), net monthly income, North/South area, and age) explained more variation in the other twelve cost regression models.

Abstract:

Genome-wide association studies (GWAS) have rapidly become a standard method for disease gene discovery. Many recent GWAS indicate that for most disorders, only a few common variants are implicated, and the associated SNPs explain only a small fraction of the genetic risk. The current study incorporated gene network information into a gene-based analysis of GWAS data for Crohn's disease (CD). The purpose was to develop statistical models that boost the power of identifying disease-associated genes and gene subnetworks by maximizing the use of existing biological knowledge from multiple sources. The results revealed that a Markov random field (MRF)-based mixture model incorporating direct neighborhood information from a single gene network is not efficient in identifying CD-related genes from the GWAS data. The reliance on direct neighborhood information alone may explain the low efficiency of these models. Alternative MRF models that look beyond direct neighborhood information will need to be developed for this purpose.

Abstract:

The genomic era brought about by recent advances in next-generation sequencing technology makes genome-wide scans for natural selection a reality. Currently, almost all statistical tests and analytical methods for identifying genes under selection are performed on an individual-gene basis. Although these methods have the power to identify genes subject to strong selection, they have limited power to discover genes targeted by moderate or weak selection forces, which are crucial for understanding the molecular mechanisms of complex phenotypes and diseases. The recent availability and rapidly growing completeness of many gene network and protein-protein interaction databases open avenues for enhancing the power of discovering genes under natural selection. The aim of this thesis is to explore and develop normal-mixture-model-based methods that leverage gene network information to enhance the power of discovering targets of natural selection. The results show that the developed statistical method, which combines the posterior log odds of the standard normal mixture model and the Guilt-By-Association score of the gene network in a naïve Bayes framework, has the power to discover genes under moderate or weak selection that bridge the genes under strong selection, aiding our understanding of the biology underlying complex diseases and related naturally selected phenotypes.
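The score combination described above can be sketched in a few lines. This is an illustrative sketch under stated assumptions, not the thesis's actual implementation: it assumes the mixture model supplies a per-gene posterior log odds of selection and the network supplies a Guilt-By-Association (GBA) log score, and that the two evidence sources are conditionally independent (the naïve Bayes assumption); all names are hypothetical.

```python
import math

def combined_log_odds(mixture_log_odds, gba_log_score):
    """Naive Bayes combination: under conditional independence,
    log odds from separate evidence sources simply add."""
    return mixture_log_odds + gba_log_score

def posterior_prob(log_odds):
    """Convert log odds back to a posterior probability of selection."""
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical gene: weak individual-gene signal, strong network support.
lo = combined_log_odds(mixture_log_odds=-0.5, gba_log_score=2.0)
p = posterior_prob(lo)  # network evidence lifts the gene above 0.5
```

This additivity is what lets modest network support rescue a gene whose individual-gene signal alone would fall below any reasonable cutoff.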

Abstract:

The problem of analyzing data with updated measurements in the time-dependent proportional hazards model arises frequently in practice. One available option is to reduce the number of intervals (or updated measurements) to be included in the Cox regression model. We empirically investigated the bias of the estimator of the time-dependent covariate effect while varying the failure rate, sample size, true values of the parameters, and the number of intervals. We also evaluated how often a time-dependent covariate needs to be collected and assessed the effect of sample size and failure rate on the power of testing a time-dependent effect. A time-dependent proportional hazards model with two binary covariates was considered. The time axis was partitioned into k intervals. The baseline hazard was assumed to be 1, so that the failure times were exponentially distributed in the ith interval. A type II censoring model was adopted to characterize the failure rate. The factors of interest were sample size (500, 1000), type II censoring with failure rates of 0.05, 0.10, and 0.20, and three values for each of the non-time-dependent and time-dependent covariates (1/4, 1/2, 3/4). The mean bias of the estimator of the coefficient of the time-dependent covariate decreased as sample size and number of intervals increased, whereas it increased as the failure rate and the true values of the covariates increased. The mean bias was smallest when all of the updated measurements were used in the model, compared with two models that used only selected measurements of the time-dependent covariate. For the model that included all the measurements, the coverage rates of the estimator of the coefficient of the time-dependent covariate were in most cases 90% or more, except when the failure rate was high (0.20).
The power associated with testing a time-dependent effect was highest when all of the measurements of the time-dependent covariate were used. An example from the Systolic Hypertension in the Elderly Program Cooperative Research Group is presented.
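The simulation design described above (baseline hazard 1, binary covariates, exponentially distributed failure times) can be sketched as follows. This is a simplified illustration of one ingredient only: a single fixed binary covariate rather than the study's interval-updated time-dependent covariate, with hypothetical function names.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_failure_times(n, beta, p_covariate=0.5):
    """Exponential failure times under proportional hazards with
    baseline hazard 1: subject i has hazard exp(beta * z_i), so its
    failure time is Exponential with rate exp(beta * z_i)."""
    z = rng.binomial(1, p_covariate, size=n)
    rate = np.exp(beta * z)
    t = rng.exponential(scale=1.0 / rate)
    return z, t

z, t = simulate_failure_times(n=10_000, beta=0.5)
# With beta > 0, the z = 1 group has a higher hazard and fails sooner
# on average (population means 1.0 vs. exp(-0.5) ~ 0.61).
mean_z0, mean_z1 = t[z == 0].mean(), t[z == 1].mean()
```

Extending this to the study's setting would mean partitioning the time axis into k intervals and redrawing the covariate value at the start of each interval.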

Abstract:

The purpose of this research is to develop a new statistical method to determine the minimum set of rows in an R × C contingency table of discrete data that explains the dependence of the observations. The statistical power of the method will be determined empirically by computer simulation to judge its efficiency relative to existing methods. The method will be applied to data on DNA fragment length variation at six VNTR loci in over 72 populations from five major racial groups of humans (total sample size over 15,000 individuals; each sample having at least 50 individuals). DNA fragment lengths grouped in bins will form the basis for studying inter-population DNA variation within the racial groups and, where variations within the racial groups are significant, will provide a rigorous re-binning procedure for forensic computation of DNA profile frequencies that takes intra-racial DNA variation among populations into account.
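One building block of the kind of method proposed above is the Pearson chi-square statistic on an R × C table, together with a ranking of which rows account for most of the dependence. The sketch below is purely illustrative (the per-row decomposition is an assumption for demonstration, not the method being developed):

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for an R x C contingency table,
    given as a list of rows of counts."""
    n_rows, n_cols = len(table), len(table[0])
    total = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]
    col_sums = [sum(table[i][j] for i in range(n_rows)) for j in range(n_cols)]
    stat = 0.0
    for i in range(n_rows):
        for j in range(n_cols):
            expected = row_sums[i] * col_sums[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

def row_contributions(table):
    """Each row's share of the chi-square statistic -- a crude way to
    rank which rows 'explain' the dependence."""
    n_rows, n_cols = len(table), len(table[0])
    total = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]
    col_sums = [sum(table[i][j] for i in range(n_rows)) for j in range(n_cols)]
    return [sum((table[i][j] - row_sums[i] * col_sums[j] / total) ** 2
                / (row_sums[i] * col_sums[j] / total) for j in range(n_cols))
            for i in range(n_rows)]

# An independent table gives statistic 0; a strongly dependent one does not.
independent = chi_square_statistic([[50, 50], [50, 50]])
dependent = chi_square_statistic([[90, 10], [10, 90]])
```

Rows with the largest contributions would be the natural candidates for the "minimum set of rows" the proposed method seeks.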

Abstract:

Fifteen iron oxide accumulations from the bottoms of two Finnish lakes ("lake ores") were found to contain as much as 50% Fe. Differential X-ray powder diffraction and selective dissolution by oxalate showed that the samples consisted of poorly crystallized goethite and ferrihydrite. The crust ores of one lake had higher ferrihydrite-to-goethite ratios than the nodular ores of the other lake. The higher ferrihydrite proportion was attributed to a higher rate of Fe2+ supply from the ground water and/or a higher rate of oxidation as a function of water depth and bottom-sediment permeability. Values of Al-for-Fe substitution of the goethites determined from unit-cell dimensions agreed with those obtained from chemical extraction if the unit-cell volume rather than the c dimension was used. In very small goethite crystals a slight expansion of the a unit-cell dimension is probably compensated by a corresponding contraction of the c dimension, so that a contraction of the c dimension need not necessarily be caused by Al substitution. The goethites of the two lakes differed significantly in their Al-for-Fe substitutions and hence in their unit-cell sizes, OH-bending characteristics, dehydroxylation temperatures, dissolution kinetics, and Mössbauer parameters. The difference in Al substitution (0 vs. 7 mole %) is attributed to the Al-supplying power of the bottom sediments: the silty-clayey sediments in one lake appear to have supplied Al during goethite formation, whereas the gravelly-sandy sediments in the other lake did not. The compositions of the goethites thus reflect their environments of formation.

Abstract:

Metamodels have proven to be very useful for reducing the computational requirements of Evolutionary Algorithm-based optimization by acting as quick-solving surrogates for slow-solving fitness functions. The relationship between metamodel scope and objective function varies between applications; in some cases the metamodel acts as a surrogate for the whole fitness function, whereas in other cases it replaces only a component of the fitness function. This paper presents a formalized qualitative process for evaluating a fitness function to determine the most suitable metamodel scope, so as to increase the likelihood of calibrating a high-fidelity metamodel and hence obtain good optimization results in a reasonable amount of time. The process is applied to the risk-based optimization of water distribution systems, a very computationally intensive problem for real-world systems. The process is validated on a simple case study (modified New York Tunnels), and the power of metamodelling is demonstrated on a real-world case study (Pacific City) with a computational speed-up of several orders of magnitude.

Abstract:

We introduce two probabilistic, data-driven models that predict a ship's speed and the situations in which a ship is likely to get stuck in ice, based on the joint effect of ice features such as the thickness and concentration of level ice, ice ridges, and rafted ice; ice compression is also considered. Two datasets were utilized to develop the models. First, data from the Automatic Identification System about the performance of a selected ship were used. Second, a numerical ice model, HELMI, developed at the Finnish Meteorological Institute, provided information about the ice field. The relations between the ice conditions and ship movements were established using Bayesian learning algorithms. The case study presented in this paper considers a single, unassisted trip of an ice-strengthened bulk carrier between two Finnish ports in the presence of challenging ice conditions that varied in time and space. The obtained results show good predictive power of the models: on average 80% accuracy for predicting the ship's speed within specified bins, and above 90% for predicting cases where a ship may get stuck in ice. We expect this new approach to facilitate safe and effective route selection in ice-covered waters, where ship performance is reflected in the objective function.

Abstract:

All holes drilled during Leg 114 contained ice-rafted debris. Analysis of samples from Hole 699A, Site 701, and Hole 704A yielded a nearly complete history of ice-rafting episodes. The first influx of ice-rafted debris at Site 699, on the northeastern slope of the Northeast Georgia Rise, occurred at a depth of 69.94 m below seafloor (mbsf) in sediments of early Miocene age (23.54 Ma). This material is of the same type as later ice-rafted debris, but represents only a small percentage of the coarse fraction. Significant ice-rafting episodes occurred during Chron 5. Minor amounts of ice-rafted debris first reached Site 701, on the western flank of the Mid-Atlantic Ridge (8.78 Ma at 200.92 mbsf), and more arrived in the late Miocene (5.88 Ma). The first significant quantity of sand and gravel appeared at a depth of 107.76 mbsf (4.42 Ma). Site 704, on the southern part of the Meteor Rise, received very little or no ice-rafted debris prior to 2.46 Ma. At this time, however, the greatest influx of ice-rafted debris occurred at this site. This time of maximum ice rafting correlates reasonably well with influxes of ice-rafted debris at Sites 701 (2.24 Ma) and 699 (2.38 Ma), in consideration of sample spacing at these two sites. These peaks of ice rafting may be Sirius till equivalents, if the proposed Pliocene age of Sirius tills can be confirmed. After about 1.67 Ma, the apparent mass-accumulation rate of the sediments at Site 704 declined, but with major fluctuations. This decline may be the result of a decrease in the rate of delivery of detritus from Antarctica due to reduced erosive power of the glaciers or a northward shift in the Polar Front Zone, a change in the path taken by the icebergs, or any combination of these factors.

Abstract:

The ecological theory of adaptive radiation predicts that the evolution of phenotypic diversity within species is generated by divergent natural selection arising from different environments and competition between species. Genetic connectivity among populations is likely also to have an important role in both the origin and maintenance of adaptive genetic diversity. Our goal was to evaluate the potential roles of genetic connectivity and natural selection in the maintenance of adaptive phenotypic differences among morphs of Arctic charr, Salvelinus alpinus, in Iceland. At a large spatial scale, we tested the predictive power of geographic structure and phenotypic variation for patterns of neutral genetic variation among populations throughout Iceland. At a smaller scale, we evaluated the genetic differentiation between two morphs in Lake Thingvallavatn relative to historically explicit, coalescent-based null models of the evolutionary history of these lineages. At the large spatial scale, populations are highly differentiated, but weakly structured, both geographically and with respect to patterns of phenotypic variation. At the intralacustrine scale, we observe modest genetic differentiation between two morphs, but this level of differentiation is nonetheless consistent with strong reproductive isolation throughout the Holocene. Rather than a result of the homogenizing effect of gene flow in a system at migration-drift equilibrium, the modest level of genetic differentiation could equally be a result of slow neutral divergence by drift in large populations. We conclude that contemporary and recent patterns of restricted gene flow have been highly conducive to the evolution and maintenance of adaptive genetic variation in Icelandic Arctic charr.

Abstract:

Detailed information about sediment properties and microstructure can be provided through the analysis of digital ultrasonic P wave seismograms recorded automatically during full waveform core logging. The physical parameter that predominantly affects elastic wave propagation in water-saturated sediments is the P wave attenuation coefficient; the related sedimentological parameter is the grain size distribution. A set of high-resolution ultrasonic transmission seismograms (~50-500 kHz), which indicate downcore variations in grain size by their signal shape and frequency content, is presented. Layers of coarse-grained foraminiferal ooze can be identified by highly attenuated P waves, whereas almost unattenuated waves are recorded in fine-grained areas of nannofossil ooze. Color-encoded pixel graphics of the seismograms and instantaneous frequencies present full waveform images of the lithology and attenuation. A modified spectral difference method is introduced to determine the attenuation coefficient and its power law α = k·f^n. Applied to synthetic seismograms derived using a "constant Q" model, even low attenuation coefficients can be quantified. A downcore analysis gives an attenuation log which ranges from ~700 dB/m at 400 kHz with a power of n = 1-2 in coarse-grained sands to a few decibels per meter and n ≤ 0.5 in fine-grained clays. A least-squares fit of a second-degree polynomial describes the mutual relationship between the mean grain size and the attenuation coefficient. When used to predict the mean grain size, an almost perfect coincidence with the values derived from sedimentological measurements is achieved.
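The frequency power law referred to above, restored to standard notation (symbols follow common usage; the exact symbols in the original paper may differ):

```latex
% Frequency-dependent attenuation coefficient: constant k, exponent n,
% with n = 1-2 in coarse-grained sands and n <= 0.5 in fine-grained clays.
\alpha(f) = k \, f^{n}

% In a spectral-difference (log spectral ratio) approach, alpha is
% estimated from amplitude spectra A_1, A_2 at two travel distances
% x_1 < x_2 along the propagation path:
\ln\frac{A_1(f)}{A_2(f)} = \alpha(f)\,(x_2 - x_1) + \text{const.}
```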

Abstract:

BACKGROUND The zebrafish is a clinically relevant model of heart regeneration. Unlike mammals, it has a remarkable capacity for heart repair after injury, and it promises novel translational applications. Amputation and cryoinjury models are key research tools for understanding injury response and regeneration in vivo. An understanding of the transcriptional responses following injury is needed to identify key players in heart tissue repair, as well as potential targets for boosting this property in humans. RESULTS We investigated amputation and cryoinjury in vivo models of heart damage in the zebrafish through unbiased, integrative analyses of independent molecular datasets. To detect genes with potential biological roles, we derived computational prediction models from microarray data from heart amputation experiments. We focused on a top-ranked set of genes highly activated in the early post-injury stage, whose activity was further verified in independent microarray datasets. Next, we performed independent validations of expression responses with qPCR in a cryoinjury model. Across in vivo models, the top candidates showed highly concordant responses at 1 and 3 days post-injury, which highlights the predictive power of our analysis strategies and the possible biological relevance of these genes. Top candidates are significantly involved in cell fate specification and differentiation, and include heart failure markers such as periostin, as well as potential new targets for heart regeneration. For example, ptgis and ca2 were overexpressed, while usp2a, a regulator of the p53 pathway, was down-regulated in our in vivo models. Interestingly, high activity of ptgis and ca2 has previously been observed in failing hearts from rats and humans. CONCLUSIONS We identified genes with potentially critical roles in the response to cardiac damage in the zebrafish. Their transcriptional activities are reproducible in different in vivo models of cardiac injury.