812 results for Multi-level Analysis
Abstract:
This comprehensive book takes a psychological perspective on patient safety. It is based on the most recent theoretical and empirical research evidence from psychology (including clinical, work, and organizational psychology) and adjacent social and behavioral sciences such as human factors. Factors that influence safety-related experiences, behaviors, and outcomes of patients and professionals working in clinical settings such as medical practices and hospitals are reviewed, structured, and critically evaluated. Consistent with the complexity of the topic, the author takes a multi-level approach to patient safety, which includes a review of individual, team, and organizational factors and outcomes. The book describes how these factors, by themselves and in combination, can facilitate or impede patient safety. Individual factors include safety-relevant knowledge, skills, abilities, and personality traits such as conscientiousness and emotional stability. Team factors include group communication, training, and leadership. Finally, organizational factors include safety culture and climate. Throughout the book, different evidence-based intervention programs are described that can help practitioners promote patient safety and prevent accidents. The book is a valuable resource for both researchers and practitioners interested in understanding, maintaining, and improving patient safety in a variety of applied settings. It is based on the most up-to-date research evidence from psychology and neighboring disciplines, and it is written in clear, non-technical language understandable to a wide audience.
Abstract:
Microarrays are high-throughput biological assays that allow the screening of thousands of genes for their expression. The main idea behind microarrays is to compute for each gene a unique signal that is directly proportional to the quantity of mRNA that was hybridized on the chip. The large number of steps involved, and the errors associated with each step, make the generated expression signal noisy. As a result, microarray data need to be carefully pre-processed before their analysis can be assumed to lead to reliable and biologically relevant conclusions. This thesis focuses on developing methods for improving the gene signal and further utilizing this improved signal for higher-level analysis. To achieve this, first, approaches for designing microarray experiments using various optimality criteria, considering both biological and technical replicates, are described. A carefully designed experiment leads to a signal with low noise, as the effect of unwanted variation is minimized and the precision of the estimates of the parameters of interest is maximized. Second, a system for improving the gene signal by using three scans at varying scanner sensitivities is developed. A novel Bayesian latent intensity model is then applied to these three sets of expression values, corresponding to the three scans, to estimate the suitably calibrated true signal of each gene. Third, a novel image segmentation approach that segregates the fluorescent signal from the undesired noise is developed using an additional dye, SYBR green RNA II. This technique helped identify signal arising only from the hybridized DNA, so that signal corresponding to dust, scratches, dye spills, and other noise sources was avoided. Fourth, an integrated statistical model is developed, in which signal correction, systematic array effects, dye effects, and differential expression are modelled jointly, as opposed to a sequential application of several methods of analysis. The methods described here have been tested only on cDNA microarrays but can, with some modifications, also be applied to other high-throughput technologies. Keywords: High-throughput technology, microarray, cDNA, multiple scans, Bayesian hierarchical models, image analysis, experimental design, MCMC, WinBUGS.
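As a rough illustration of the multi-scan calibration idea described above, here is a minimal sketch; it is not the thesis's model. It assumes a multiplicative gain per scan, log-normal noise, a hard saturation threshold and a flat prior with random-walk Metropolis updates, all of which are illustrative assumptions rather than details taken from the work itself.

```python
# Toy Bayesian latent-intensity calibration from three scans of the same array
# at different scanner sensitivities (illustrative parameters throughout).
import numpy as np

rng = np.random.default_rng(0)

n_genes, n_scans = 200, 3
true_log_mu = rng.normal(8.0, 1.5, n_genes)        # latent log-intensities
gains = np.array([0.5, 1.0, 2.0])                  # assumed scanner sensitivities
sigma = 0.3                                        # log-scale measurement noise

# Simulated observations: y_ij = gain_j * mu_i * noise, saturating at 2^16 - 1
y = np.exp(true_log_mu[:, None] + np.log(gains)[None, :] +
           rng.normal(0, sigma, (n_genes, n_scans)))
y = np.minimum(y, 2**16 - 1)                       # scanner saturation

def log_lik(log_mu_i, y_i):
    # Treat saturated readings as censored: drop their likelihood contribution.
    # (Genes saturated in every scan are effectively unidentifiable here.)
    mask = y_i < 2**16 - 1
    pred = log_mu_i + np.log(gains)[mask]
    return -0.5 * np.sum((np.log(y_i[mask]) - pred) ** 2) / sigma**2

est = np.log(y[:, 0] / gains[0])                   # crude start from the lowest-gain scan
for _ in range(2000):                              # random-walk Metropolis, flat prior
    prop = est + rng.normal(0, 0.1, n_genes)
    for i in range(n_genes):
        if log_lik(prop[i], y[i]) - log_lik(est[i], y[i]) > np.log(rng.random()):
            est[i] = prop[i]

print("RMSE of calibrated log-intensity:", np.sqrt(np.mean((est - true_log_mu) ** 2)))
```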
Abstract:
Sown pasture rundown and declining soil fertility for forage crops are too serious to ignore, with losses in beef production of up to 50% across Queensland. The feasibility of using strategic applications of nitrogen (N) fertiliser to address these losses was assessed by analysing a series of scenarios using data drawn from published studies, local fertiliser trials and expert opinion. While N fertiliser can dramatically increase productivity (growth, feed quality and beef production gains of over 200% in some scenarios), the estimated economic benefits, derived from paddock-level enterprise budgets for a fattening operation, were much more modest. In the best-performing sown grass scenarios, average gross margins were doubled or tripled at the assumed fertiliser response rates, and internal rates of return of up to 11% were achieved. Using fertiliser on forage sorghum or oats was a much less attractive option and, under the paddock-level analysis and assumptions used, forages struggled to be profitable even on fertile sites with no fertiliser input. The economics of nitrogen fertilising on grass pasture were sensitive to the assumed response rates in both pasture growth and liveweight gain. Consequently, targeted research is proposed to re-assess the responses used in this analysis, which are largely based on research conducted 25-40 years ago, when soils were generally more fertile and pastures less rundown.
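For readers unfamiliar with how an internal rate of return is derived from a paddock-level budget of this kind, here is a minimal sketch; the cash-flow figures and the simple bisection IRR solver are illustrative assumptions, not the paper's data or method.

```python
# Hypothetical $/ha cash flow: year-0 fertiliser/establishment outlay followed
# by annual gross-margin gains from the extra liveweight produced.
import numpy as np

cash_flows = np.array([-150.0, 25.0, 30.0, 30.0, 35.0, 35.0, 40.0])  # invented values

def npv(rate, flows):
    years = np.arange(len(flows))
    return np.sum(flows / (1.0 + rate) ** years)

def irr(flows, lo=-0.99, hi=1.0, tol=1e-6):
    # Bisection on the NPV sign change; assumes a single sign change in the flows.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"IRR: {irr(cash_flows):.1%}")   # the paper reports up to 11% in its best scenarios
```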
Abstract:
The Capercaillie (Tetrao urogallus L.) is often used as a focal species for landscape ecological studies: the minimum size for its lekking area is 300 ha, and the annual home range for an individual may cover 30–80 km². In Finland, Capercaillie populations have decreased by approximately 40–85%, with the declines likely to have started in the 1940s. Although the declines have partly stabilized from the 1990s onwards, it is obvious that the negative population trend was at least partly caused by changes in human land use. The aim of this thesis was to study the connections between human land use and Capercaillie populations in Finland, using several spatial and temporal scales. First, the effect of forest age structure on Capercaillie population trends was studied in 18 forestry board districts in Finland during 1965–1988. Second, the abundances of Capercaillie and Moose (Alces alces L.) were compared in terms of several land-use variables on a scale of 50 × 50 km grids and in five regions in Finland. Third, the effects of forest cover and fine-grain forest fragmentation on Capercaillie lekking area persistence were studied in three study locations in Finland, on 1000 and 3000 m spatial scales surrounding the leks. The analyses considering lekking areas were performed with two definitions for forest: > 60 and > 152 m³ ha⁻¹ of timber volume. The results show that patterns and processes at large spatial scales strongly influence Capercaillie in Finland. In particular, in southwestern and eastern Finland, high forest cover and low human impact were found to be beneficial for this species. Forest cover (> 60 m³ ha⁻¹ of timber) surrounding the lekking sites positively affected lekking area persistence only at the larger landscape scale (3000 m radius). The effects of older forest classes were hard to assess due to the scarcity of older forests in several study areas. Young and middle-aged forest classes were common in the vicinity of areas with high Capercaillie abundances, especially in northern Finland. The increase in the amount of younger forest classes did not provide a good explanation for the Capercaillie population decline in 1965–1988. In addition, there was no significant connection between mature forests (> 152 m³ ha⁻¹ of timber) and lekking area persistence in Finland. It seems that in present-day Finnish landscapes, the area covered with old forest is either too scarce to efficiently explain the abundance of Capercaillie and the persistence of the lekking areas, or the effect of forest age is only important at smaller spatial scales than the ones studied in this thesis. In conclusion, larger spatial scales should be considered when assessing future Capercaillie management. According to the proposed multi-level planning, the first priority should be to secure large, regional-scale forest cover, and the second priority should be to maintain a fine-grained, heterogeneous structure within the separate forest patches. This calls for management units covering hundreds of hectares, or even tens or hundreds of square kilometers, which requires regional-level land-use planning and co-operation between forest owners.
Abstract:
Considered to be the next generation of heat transfer fluids, nanofluids have been receiving a growing amount of attention in the past decade despite the controversy and inconsistencies that have been reported. Nanofluids have great potential in a wide range of fields, particularly for solar thermal applications. This paper presents a comprehensive review of the literature on the enhancements in thermophysical and rheological properties resulting from experimental work conducted on molten salt nanofluids used in solar thermal energy systems. It was found that an increase in specific heat of 10–30% was achieved for most nanofluids and appeared independent of particle size and, to an extent, of mass concentration. The specific heat increase was attributed to the formation of nanostructures at the solid–liquid interface, and it was also noted that the aggregation of nanoparticles has detrimental effects on the specific heat increase. Thermal conductivity was also found to increase, though less consistently, ranging from 3% to 35%. Viscosity was seen to increase with the addition of nanoparticles and is dependent on the degree of aggregation of the particles. An in-depth micro-level analysis of the mechanisms behind the thermophysical property changes is presented in this paper. In addition, possible trends are discussed relating to currently theorised mechanisms in an attempt to explain the behaviour of molten salt nanofluids.
Abstract:
Ductility-based design of reinforced concrete structures implicitly assumes a certain amount of damage under the action of a design basis earthquake. The damage undergone by a structure needs to be quantified so as to assess the post-seismic reparability and functionality of the structure. The paper presents an analytical method for the quantification and location of seismic damage through system identification methods. It may be noted that soft ground-storey buildings are the major casualties in any earthquake, and hence the example structure is a soft or weak first-storey one, whose seismic response and temporal variation of damage are computed using a non-linear dynamic analysis program (IDARC) and compared with a normal structure. A time-period-based damage identification model is used and suitably calibrated against classic damage models. The regenerated stiffness of the three-degrees-of-freedom model (for the three-storied frame) is used to locate the damage, both on-line and after the seismic event. Multi-resolution analysis using wavelets is also used for localized damage identification in the soft-storey columns.
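As a hedged illustration of what a time-period-based damage index can look like (this particular stiffness-degradation form is a common textbook definition assumed here, not quoted from the paper): the natural period of an elastic system is T = 2π√(m/k), so a loss of stiffness at constant mass shows up as a lengthening period, and a global damage index can be written as

```latex
T = 2\pi\sqrt{\frac{m}{k}}
\quad\Longrightarrow\quad
D = 1 - \frac{k_{\mathrm{damaged}}}{k_{\mathrm{initial}}}
  = 1 - \left(\frac{T_{\mathrm{initial}}}{T_{\mathrm{damaged}}}\right)^{2}.
```

D ranges from 0 (no stiffness loss) towards 1 as the storey softens; calibration against classic damage models then maps such an index onto observed damage states.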
Abstract:
The aim of the study was to examine the influence of school smoking policy and school smoking prevention programs on the smoking behaviour of students in high schools in Prince Edward Island, using the School Health Action Planning Evaluation System (SHAPES). The total sample included 13,131 observations of students in grades 10-12 in ten high schools in Prince Edward Island over three waves of data collection (1999, 2000, and 2001). Changes in the prevalence of smoking and in factors influencing smoking behaviour were analyzed using descriptive statistics and chi-square tests. Multi-level logistic regression analyses were used to examine how both school and student characteristics were associated with smoking behaviour (I, II, III, IV). Since students were located within schools, a basic 2-level nested structure was used in which individual students (level 1) were nested within schools (level 2). For grade 12 students, the combination of school policies and programs was not associated with the risk of smoking, and the presence of the new policy was not associated with decreased risk of smoking unless there were clear rules in place (I). In the grade 10 study (II), schools with both policies and programs were not associated with decreased risk of smoking. However, the smoking behaviour of older students (grade 12) at a school was associated with younger students' (grade 10) smoking behaviour. Students first enrolled in a high school in grade 9, rather than grade 10, were at increased risk of occasional smoking. For students who transitioned from grade 10 to 12 (III), close friends' smoking had a substantial influence on smoking behaviour for both males and females. Having one or more close friends who smoke (Odds Ratio (OR) = 37.46; 95% CI = 19.39 to 72.36), one or more smokers in the home (OR = 2.35; 95% CI = 1.67 to 3.30), and seeing teachers and staff smoking on or near school property (OR = 1.78; 95% CI = 1.13 to 2.80) were strongly associated with increased risk of smoking for grade 12 students. Smoking behaviour increased for both junior (Group 1) and senior (Group 2) students (IV). Group 1 students showed a greater decrease in smoking behaviour and in factors influencing smoking behaviour compared with Group 2 students. Overestimating the percentage of youth their age who smoke was strongly associated with an increased likelihood of smoking. Smoking rates showed a decreasing trend (1999, 2000, and 2001). However, policies and programs alone were not successful in influencing the smoking behaviour of youth. Rather, factors within the students' and schools' contextual environment influenced smoking behaviour. Comprehensive approaches are required for school-based tobacco prevention interventions. Keywords: schools, policy, programs, smoking prevention, adolescents Subject Terms: school-based programming, public health, health promotion
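For orientation on how odds ratios and 95% confidence intervals of the kind reported above are read, here is a minimal sketch using the classic Wald formula on a 2x2 table; the counts are invented for illustration, and the study itself used multi-level logistic regression with students nested in schools, not a raw 2x2 table.

```python
# Odds ratio and 95% Wald confidence interval from a hypothetical 2x2 table.
import numpy as np

# rows: exposed (close friend smokes) / unexposed; columns: smoker / non-smoker
a, b = 180, 60     # exposed: smokers, non-smokers   (hypothetical counts)
c, d = 40, 520     # unexposed: smokers, non-smokers (hypothetical counts)

log_or = np.log((a * d) / (b * c))                  # log odds ratio
se = np.sqrt(1/a + 1/b + 1/c + 1/d)                 # Wald standard error of log OR
ci_lo, ci_hi = np.exp(log_or - 1.96 * se), np.exp(log_or + 1.96 * se)

print(f"OR = {np.exp(log_or):.2f}, 95% CI = {ci_lo:.2f} to {ci_hi:.2f}")
```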
Abstract:
In this paper, we study the behaviour of the slotted Aloha multiple access scheme with a finite number of users under different traffic loads and optimize the retransmission probability q_r for various settings, cost objectives and policies. First, we formulate the problem as a parameter optimization problem and use certain efficient smoothed functional algorithms for finding the optimal retransmission probability parameter. Next, we propose two classes of multi-level closed-loop feedback policies (for finding, in each case, a retransmission probability q_r that now depends on the current system state) and apply the above algorithms for finding an optimal policy within each class of policies. While one of the policy classes depends on the number of backlogged nodes in the system, the other depends on the number of time slots since the last successful transmission. The latter policies are more realistic, as it is difficult to keep track of the number of backlogged nodes at each instant. We investigate the effect of increasing the number of levels in the feedback policies. We also investigate the effects of using different cost functions (with and without penalization) in our algorithms and the corresponding changes in throughput and delay. Both of our algorithms use two-timescale stochastic approximation. One of the algorithms uses one simulation while the other uses two simulations of the system. The two-simulation algorithm is seen to perform better than the other algorithm. Optimal multi-level closed-loop policies are seen to perform better than optimal open-loop policies. The performance further improves when more levels are used in the feedback policies.
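To make the role of the retransmission probability concrete, here is a minimal sketch of finite-user slotted Aloha that sweeps a fixed, open-loop q_r and reports throughput; it is not the paper's two-timescale stochastic-approximation algorithm, and all parameter values are illustrative.

```python
# Monte Carlo simulation of finite-user slotted Aloha with a fixed
# retransmission probability q_r (open-loop policy).
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_users=20, p_new=0.02, q_r=0.1, n_slots=50_000):
    backlogged = np.zeros(n_users, dtype=bool)
    successes = 0
    for _ in range(n_slots):
        # Backlogged users retransmit with probability q_r;
        # idle users transmit a fresh packet with probability p_new.
        attempts = np.where(backlogged,
                            rng.random(n_users) < q_r,
                            rng.random(n_users) < p_new)
        n_tx = attempts.sum()
        if n_tx == 1:                       # exactly one transmission succeeds
            successes += 1
            backlogged[np.argmax(attempts)] = False
        elif n_tx > 1:                      # collision: all involved become backlogged
            backlogged |= attempts
    return successes / n_slots              # throughput in packets per slot

for q_r in (0.02, 0.05, 0.1, 0.2, 0.5):
    print(f"q_r = {q_r:.2f}: throughput = {simulate(q_r=q_r):.3f}")
```

A closed-loop policy of the kind studied in the paper would replace the fixed q_r with a value chosen per slot from the current system state (e.g. the backlog estimate or slots since the last success).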
Abstract:
In this short essay I offer some “business researcher” advice on how to leverage a strong background in psychology when attempting to contribute to the maturing field of “entrepreneurship research”. Psychologists can benefit from within-discipline research on, e.g., emergence, small groups, fit, and expertise, as well as from methodological strengths in, e.g., experimentation, operationalisation of constructs, and multi-level modelling. However, achieving full leverage of these strengths requires a clear conceptualisation of “entrepreneurship” as well as insights into the challenges posed by the nature of this class of phenomena.
Abstract:
The role of cobalt centers in promoting the recombination and trapping processes in n-type germanium has been investigated. Data on lifetime measurements carried out by the steady-state photoconductivity and photo-magneto-electric methods in the temperature range 145 to 300°K on n-type germanium samples containing cobalt in the concentration range 1×10¹³ to 5×10¹⁴ cm⁻³ are presented. The results are analysed on the basis of Sah-Shockley's multi-level formula to yield the capture cross-sections Sp= (hole capture cross-section at the doubly negatively charged center) and Sn- (electron capture cross-section at the singly negatively charged center) and their temperature dependence. It is found that Sp= is (22 ± 6)×10⁻¹⁶ cm² and Sn- is ∼0·1×10⁻¹⁶ cm² at 145°K. Sp= varies approximately as T⁻ⁿ (n = 3·5 to 4·5) in the range 145-220°K; above 225°K the index 'n' tends to a smaller value. Sn- is practically temperature independent below 180°K and increases with increasing temperature above 180°K. The value of Sp= and its temperature variation lead one to the conclusion that, during capture at attractive centers, the phonon cascade mechanism is responsible for the dissipation of the recombination energy.
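For context (and as an editorial assumption, since the abstract does not reproduce the formula): Sah-Shockley's treatment generalizes the single-level Shockley-Read-Hall result to centres with several charge states. In the single-level limit the measured lifetime relates to the capture cross-sections roughly as

```latex
\tau = \frac{\tau_{p0}\,(n_0 + n_1 + \Delta n) + \tau_{n0}\,(p_0 + p_1 + \Delta n)}{n_0 + p_0 + \Delta n},
\qquad
\tau_{p0} = \frac{1}{\sigma_p\, v_{\mathrm{th}}\, N_t},
\quad
\tau_{n0} = \frac{1}{\sigma_n\, v_{\mathrm{th}}\, N_t},
```

so fitting the measured lifetime as a function of temperature and injection level yields the cross-sections and their temperature dependence.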
Abstract:
A multilevel inverter with a 12-sided polygonal voltage space vector structure is proposed in this paper. The present scheme provides elimination of common-mode voltage variation and of the 5th and 7th order harmonics in the entire operating range of the drive. The proposed multilevel structure is achieved by cascading only conventional two-level inverters with asymmetrical DC link voltages. The bandwidth problems associated with current controllers for the conventional hexagonal voltage space vector structure, caused by the presence of 5th and 7th harmonics in the overmodulation region, are absent in the present 12-sided structure. Linear voltage control up to 12-step operation is therefore possible with the present twelve-sided scheme, with less current control complexity. An open-end winding structure is used for the induction motor drive.
Abstract:
Glioblastoma (GBM; grade IV astrocytoma) is a very aggressive form of brain cancer with poor survival and few qualified predictive markers. This study integrates experimentally validated genes that showed specific upregulation in GBM along with their protein-protein interaction information. A system-level analysis was used to construct a GBM-specific network. Computation of the topological parameters of the network showed a scale-free pattern and hierarchical organization. From the large network involving 1,447 proteins, we synthesized subnetworks and annotated them with highly enriched biological processes. A careful dissection of the functional modules, important nodes, and their connections identified two novel intermediary molecules, CSK21 and protein phosphatase 1 alpha (PP1A), connecting the two subnetworks CDC2-PTEN-TOP2A-CAV1-P53 and CDC2-CAV1-RB-P53-PTEN, respectively. Real-time quantitative reverse transcription-PCR analysis revealed CSK21 to be moderately upregulated and PP1A to be overexpressed by 20-fold in GBM tumor samples. Immunohistochemical staining revealed nuclear expression of PP1A only in GBM samples. Thus, CSK21 and PP1A, whose functions are intimately associated with cell cycle regulation, might play a key role in gliomagenesis. Cancer Res; 70(16); 6437-47. ©2010 AACR.
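As a minimal sketch of the two topological checks mentioned above (a heavy-tailed, scale-free-like degree distribution and clustering that declines with degree, the usual signature of hierarchical organization), the following uses a synthetic Barabasi-Albert graph as a stand-in for the real 1,447-protein network; it is not the paper's pipeline.

```python
# Degree and clustering-vs-degree summaries on a synthetic scale-free graph.
import networkx as nx
import numpy as np
from collections import defaultdict

G = nx.barabasi_albert_graph(1447, 3, seed=0)   # placeholder for the PPI network

degrees = np.array([d for _, d in G.degree()])
clustering = nx.clustering(G)                   # per-node clustering coefficient

# Average clustering coefficient per degree, C(k); a roughly decreasing C(k)
# with increasing k suggests hierarchical modularity.
ck = defaultdict(list)
for node, k in G.degree():
    ck[k].append(clustering[node])

print("max degree:", degrees.max(), " mean degree:", degrees.mean().round(2))
for k in sorted(ck)[:5]:
    print(f"C(k) at k={k}: {np.mean(ck[k]):.3f}")
```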
Abstract:
M.A. (Educ.) Anu Kajamaa from the University of Helsinki, Center for Research on Activity, Development and Learning (CRADLE), examines change efforts and their consequences in public-sector health care. The aim of her academic dissertation is, by providing a new conceptual framework, to widen our understanding of organizational change efforts, their consequences and the associated managerial challenges. Despite multiple change efforts, the results of health care development projects have not been very promising, and many developmental needs and managerial challenges remain. The study challenges the predominant, well-framed health care change paradigm and calls for an expanded view to explore the underlying issues and multiplicities of change efforts and their consequences. The study asks what kind of expanded conceptual framework is needed to better understand organizational change as transcending currently dominant oppositions in management thinking, specifically in the field of health care. The study includes five explorative case studies of health care change efforts and their consequences in Finland. Theory and practice are tightly interconnected in the study. The methodology integrates the ethnography of organizational change, a narrative approach and cultural-historical activity theory. From the stance of activity theory, historicity, contradictions, locality and employee participation play significant roles in developing health care. The empirical data of the study were mainly collected in two projects, funded by the Finnish Work Environment Fund, in public-sector health care organizations during the years 2004-2010. By exploring the oppositions between distinct views on organizational change and the multi-site, multi-level and multi-logic nature of organizational change, the study develops an expanded, multidimensional activity-theoretical framework on organizational change and management thinking. The findings contribute to activity theory and organization studies, and provide information for health care managers and practitioners. The study shows that continuous development efforts, bridged to one another and anchored to collectively created new activity models, can lead to significant improvements and organizational learning in health care, and it presents such expansive learning processes. The way change efforts are conducted in organizations plays a critical role in the creation of collective new practices and tools and in establishing ownership over them. Some of the studied change efforts were discontinuous or encapsulated, not benefiting the larger whole. The study shows that the stagnation and unexpected consequences of change efforts relate to the unconnectedness of the different organizational sites, levels and logics. If not dealt with, unintended consequences such as obstacles, breaks and conflicts may stem promising change and learning processes.
Abstract:
High-performance video standards use prediction techniques to achieve high picture quality at low bit rates. The type of prediction determines the bit rate and the image quality. Intra prediction achieves high video quality with a significant reduction in bit rate. This paper presents an area-optimized architecture for intra prediction in H.264 decoding at HDTV resolution, with a target of 60 fps. The architecture was validated on a Virtex-5 FPGA based platform and achieves a frame rate of 64 fps. It is based on a multi-level memory hierarchy to reduce latency and ensure optimal resource utilization, and it removes redundancy by reusing the same functional blocks across different modes. The proposed architecture uses only 13% of the total LUTs available on the Xilinx FPGA XC5VLX50T.
Abstract:
Realizing the importance of aerosol characterization and of addressing its spatio-temporal heterogeneities over the Bay of Bengal (BoB), campaign-mode observations of aerosol parameters were carried out using simultaneous cruise, aircraft and land-based measurements during the Winter Integrated Campaign for Aerosols, gases and Radiation Budget (W_ICARB). As part of this campaign, airborne measurements of total and hemispheric backscatter coefficients were made over several regions of coastal India and the eastern BoB using a three-wavelength integrating nephelometer. The measurements include high-resolution multi-level (ML) sorties for altitude profiles and bi-level (BL) sorties for spatial gradients within and above the Marine Atmospheric Boundary Layer (MABL) over the BoB. The vertical profiles of the scattering coefficients are investigated in light of information on the vertical structure of atmospheric stability, derived from collocated GPS (Global Positioning System) aided radiosonde ascents. In general, the altitude profiles revealed that the scattering coefficient remained steady in the convectively well-mixed regions and dropped off above the MABL. This decrease was quite rapid off the Indian mainland, while it was more gradual in the eastern BoB. Investigation of horizontal gradients revealed that the scattering coefficients over the northern BoB are 3 to 4 times higher than those over the central BoB, both within and above the MABL. A north-south gradient in scattering coefficients is observed over Port Blair in the eastern BoB, with values decreasing from south to north; this is attributed to a similar gradient in the surface wind speed, which is reflected in the sea salt abundance. The gradients are parameterized using best-fit analytical functions.