939 results for sensory analysis, utilization of byproduct


Relevance: 100.00%

Publisher:

Abstract:

Spatial independent component analysis (sICA) of functional magnetic resonance imaging (fMRI) time series can generate meaningful activation maps and associated descriptive signals, which are useful to evaluate datasets of the entire brain or selected portions of it. Besides computational implications, variations in the input dataset combined with the multivariate nature of ICA may lead to different spatial or temporal readouts of brain activation phenomena. By reducing and increasing a volume of interest (VOI), we applied sICA to different datasets from real activation experiments with multislice acquisition and single or multiple sensory-motor task-induced blood oxygenation level-dependent (BOLD) signal sources with different spatial and temporal structure. Using receiver operating characteristics (ROC) methodology for accuracy evaluation and multiple regression analysis as benchmark, we compared sICA decompositions of reduced and increased VOI fMRI time series containing auditory, motor and hemifield visual activation occurring separately or simultaneously in time. Both approaches yielded valid results; however, the results of the increased VOI approach were spatially more accurate than those of the reduced VOI approach. This is consistent with the capability of sICA to take advantage of extended samples of statistical observations and suggests that sICA is more powerful at delineating brain activity with extended rather than reduced VOI datasets.
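
Since the abstract hinges on sICA decomposition followed by ROC-based accuracy evaluation, a minimal sketch may help make the pipeline concrete. It is not the study's code: the array shapes, the random stand-in data and the reference mask are hypothetical, and scikit-learn's FastICA is used only as a generic ICA implementation.

```python
# Minimal sketch of spatial ICA followed by ROC evaluation (hypothetical data).
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 120, 5000
voi_data = rng.standard_normal((n_timepoints, n_voxels))   # stand-in for a masked fMRI VOI
reference_map = rng.integers(0, 2, n_voxels)               # stand-in "true" activation mask

# Spatial ICA: voxels are treated as observations, so the estimated sources are
# spatial maps and the mixing matrix holds their associated time courses.
ica = FastICA(n_components=20, random_state=0)
spatial_maps = ica.fit_transform(voi_data.T).T             # shape: (components, voxels)
time_courses = ica.mixing_                                 # shape: (timepoints, components)

# ROC accuracy of each component map against the reference mask; keep the best.
aucs = [roc_auc_score(reference_map, np.abs(m)) for m in spatial_maps]
print("best component AUC:", round(max(aucs), 3))
```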

Relevance: 100.00%

Publisher:

Abstract:

BACKGROUND: The sensory drive hypothesis predicts that divergent sensory adaptation in different habitats may lead to premating isolation upon secondary contact of populations. Speciation by sensory drive has traditionally been treated as a special case of speciation as a byproduct of adaptation to divergent environments in geographically isolated populations. However, if habitats are heterogeneous, local adaptation in the sensory systems may cause the emergence of reproductively isolated species from a single unstructured population. In polychromatic fishes, visual sensitivity might become adapted to local ambient light regimes and the sensitivity might influence female preferences for male nuptial color. In this paper, we investigate the possibility of speciation by sensory drive as a byproduct of divergent visual adaptation within a single initially unstructured population. We use models based on explicit genetic mechanisms for color vision and nuptial coloration. RESULTS: We show that in simulations in which the adaptive evolution of visual pigments and color perception are explicitly modeled, sensory drive can promote speciation along a short selection gradient within a continuous habitat and population. We assumed that color perception evolves to adapt to the modal light environment that individuals experience and that females prefer to mate with males whose nuptial color they are most sensitive to. In our simulations color perception depends on the absorption spectra of an individual's visual pigments. Speciation occurred most frequently when the steepness of the environmental light gradient was intermediate and dispersal distance of offspring was relatively small. In addition, our results predict that mutations that cause large shifts in the wavelength of peak absorption promote speciation, whereas we did not observe speciation when peak absorption evolved by stepwise mutations with small effect. CONCLUSION: The results suggest that speciation can occur where environmental gradients create divergent selection on sensory modalities that are used in mate choice. Evidence for such gradients exists from several animal groups, and from freshwater and marine fishes in particular. The probability of speciation in a continuous population under such conditions may then critically depend on the genetic architecture of perceptual adaptation and female mate choice.
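
The mate-choice rule described above (females prefer the male nuptial color they are most sensitive to, with sensitivity set by the pigment's absorption spectrum) can be sketched very compactly. This is only an illustrative toy, not the authors' simulation: the Gaussian sensitivity curve, its width, and the example wavelengths are assumptions.

```python
# Toy sketch of sensory-drive mate choice: sensitivity is a Gaussian around the
# female pigment's peak-absorption wavelength (lambda_max); mating probability is
# proportional to her sensitivity to each male's nuptial color. All values assumed.
import numpy as np

def sensitivity(lambda_max, color_nm, width_nm=40.0):
    """Relative sensitivity of a visual pigment to a nuptial color (wavelengths in nm)."""
    return np.exp(-((color_nm - lambda_max) ** 2) / (2.0 * width_nm ** 2))

def choose_mate(female_lambda_max, male_colors_nm, rng):
    """Pick a male index with probability proportional to the female's sensitivity."""
    weights = np.array([sensitivity(female_lambda_max, c) for c in male_colors_nm])
    return rng.choice(len(male_colors_nm), p=weights / weights.sum())

rng = np.random.default_rng(1)
males = [480.0, 530.0, 600.0]                  # hypothetical nuptial colors (nm)
print(choose_mate(female_lambda_max=540.0, male_colors_nm=males, rng=rng))
```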

Relevance: 100.00%

Publisher:

Abstract:

The present study examined cellular mechanisms involved in the production and secretion of human γIFN. The hypothesis of this investigation was that γIFN is an export glycoprotein whose synthesis in human T lymphocytes is dependent on membrane stimulation, polypeptide synthesis in the rough endoplasmic reticulum, packaging in the Golgi complex, and release from the cell by exocytosis. The model system for this examination utilized T lymphocytes from normal donors and patients with chronic lymphocytic leukemia (CLL) induced in vitro with the tumor promoter phorbol 12-myristate 13-acetate (PMA) and the lectin phytohemagglutinin (PHA) to produce γIFN. This study reconfirmed the ability of PMA and PHA to synergistically induce γIFN production in normal T lymphocytes, as measured by viral inhibition assays and radio-immunoassays for γIFN. The leukemic T cells were demonstrated to produce γIFN in response to treatment with PHA. PMA treatment also induced γIFN production in the leukemic T cells, which was much greater than that observed in similarly treated normal T cells. In these same cells, however, combined treatment with the agents was shown to be ineffective at inducing γIFN production beyond the levels stimulated by the individual agents. In addition, the present study reiterated the synergistic effect of PMA/PHA on the stimulation of growth kinetics in normal T cells. The cell cycle of the leukemic T cells was also responsive to treatment with the agents, particularly with PMA treatment. A number of morphological alterations were attributed to PMA treatment, including the acquisition of an elongated configuration, nuclear folds, and large cytoplasmic vacuoles. Many of the effects were observed to be reversible with dilution of the agents, and reversion occurred more rapidly in the leukemic T cells. Most importantly, utilization of a thin-section immuno-colloidal gold labelling technique for electron microscopy provided, for the first time, direct evidence of the cellular mechanism of γIFN production and secretion. The results of this latter study support the idea that γIFN is produced in the rough endoplasmic reticulum, transferred to the Golgi complex for accumulation and packaging, and released from the T cells by exocytosis.

Relevance: 100.00%

Publisher:

Abstract:

Breast and cervical cancer screening rates continue to be lower in Hispanic women than in other ethnic subgroups. Several factors have been identified that influence health care utilization. The use of preventive services (cancer screenings and adherence), in addition to yearly doctor visits, is often used to measure health care utilization. A secondary analysis of an existing dataset containing baseline survey data collected from participants of an intervention trial to test the Cultivando La Salud (CLS) program was used to analyze the association between cultural health practice use (use of a curandero, sobador, and herbal remedies) and health care utilization. The sample consisted of women 50 years of age and older living in farmworker communities in four sites: Eagle Pass, TX, Anthony, NM, Merced, CA, and Watsonville, CA (n=708). Participants reported using a curandero (5.67%), sobador (29.79%), and herbal remedies (46.65%) at some point in their lives. The use of cultural health practices was found to significantly influence utilization of certain health care services: use of herbal remedies influenced doctor visits, adherence to mammography screening and adherence to Pap test screening; use of a curandero influenced ever having a mammogram; use of a sobador influenced ever having a mammogram, ever having a Pap test, and Pap test adherence. In addition, women reporting use of a curandero or herbal remedies were found to be more avoidant of the health care system than those who reported not using them. Further research is needed to analyze the influence of cultural health practices on health care utilization.
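
As a rough illustration of the kind of association analysis the abstract describes, the sketch below fits a logistic regression of mammography adherence on reported use of cultural health practices. It is not the study's analysis: the variable names are invented, the data are synthetic (only the usage proportions echo the abstract), and statsmodels is an assumed tooling choice.

```python
# Hypothetical sketch: logistic regression of screening adherence on cultural
# health practice use, with synthetic data standing in for the CLS baseline survey.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 708                                              # sample size quoted in the abstract
df = pd.DataFrame({
    "uses_curandero":       rng.binomial(1, 0.057, n),
    "uses_sobador":         rng.binomial(1, 0.298, n),
    "uses_herbal_remedies": rng.binomial(1, 0.467, n),
    "mammogram_adherent":   rng.binomial(1, 0.5, n),  # synthetic outcome
})

X = sm.add_constant(df[["uses_curandero", "uses_sobador", "uses_herbal_remedies"]])
fit = sm.Logit(df["mammogram_adherent"], X).fit(disp=False)
print(np.exp(fit.params).round(2))                   # odds ratios for each practice
```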

Relevance: 100.00%

Publisher:

Abstract:

Objectives. To investigate procedural gender equity by assessing predisposing, enabling and need predictors of gender differences in annual medical expenditures and utilization among hypertensive individuals in the U.S., and to estimate and compare lifetime medical expenditures among hypertensive men and women in the U.S.

Data sources. The 2001-2004 Medical Expenditure Panel Survey (MEPS); the 1986-2000 National Health Interview Survey (NHIS); and the NHIS linked to mortality in the National Death Index through 2002 (2002 NHIS-NDI).

Study design. We estimated total medical expenditures using a four-equation regression model, specific medical expenditures using a two-equation regression model, and utilization using a negative binomial regression model. Procedural equity was assessed by applying the Aday et al. theoretical framework. Expenditures were estimated in 2004 dollars. We estimated hypertension-attributable medical expenditures and utilization among men and women. To estimate lifetime expenditures from ages 20 to 85+, we estimated medical expenditures with cross-sectional data and survival with prospective data. The four-equation regression model was used to estimate average annual medical expenditures, defined as the sum of inpatient stay, emergency room visit, outpatient visit, office-based visit, and prescription drug expenditures. Life tables were used to estimate the distribution of lifetime medical expenditures for hypertensive men and women at different ages; factors such as disease incidence, medical technology and health care costs were assumed to be fixed. Both total and hypertension-attributable expenditures among men and women were estimated.

Data collection. We used the 2001-2004 MEPS household component and medical condition files; the NHIS person and condition files from 1986-1996 and the 1997-2000 sample adult files; and the 1986-2000 NHIS linked to mortality in the 2002 NHIS-NDI.

Principal findings. After controlling for predisposing, enabling and need factors, hypertensive men had significantly less utilization than hypertensive women for most measures. Similarly, hypertensive men had lower prescription drug (-9.3%), office-based (-7.2%) and total medical (-4.5%) expenditures than hypertensive women. However, men had more hypertension-attributable medical expenditures and utilization than women. The expected total lifetime expenditure for an average life-table individual at age 20 was $188,300 for hypertensive men and $254,910 for hypertensive women, but the lifetime expenditure attributable to hypertension was $88,033 for men and $40,960 for women.

Conclusion. Hypertensive women had more utilization and expenditures for most measures than hypertensive men, possibly indicating procedural inequity. However, the relatively higher hypertension-attributable health care of men indicates greater utilization of resources to treat hypertension-related disease among men than among women. Similar results were found in the lifetime analyses.

Key words: gender, medical expenditures, utilization, hypertension-attributable, lifetime expenditure
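
The lifetime-expenditure construction (annual expenditures from cross-sectional data combined with survival from a life table) can be illustrated with a very small sketch. The annual expenditure and survival schedules below are made up solely for illustration; only the age range 20 to 85+ comes from the abstract.

```python
# Illustrative sketch: expected lifetime medical expenditure from age 20, computed
# by weighting hypothetical annual expenditures by hypothetical survival probabilities.
ages = range(20, 86)
annual_expenditure = {a: 1500.0 + 120.0 * (a - 20) for a in ages}        # assumed $/year
annual_survival = {a: 0.999 - 0.0001 * (a - 20) ** 1.2 for a in ages}    # assumed P(survive year)

expected_total, alive = 0.0, 1.0
for age in ages:
    expected_total += alive * annual_expenditure[age]   # expenditure weighted by survival to this age
    alive *= annual_survival[age]                        # update probability of reaching the next age

print(f"expected lifetime expenditure from age 20: ${expected_total:,.0f}")
```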

Relevance: 100.00%

Publisher:

Abstract:

The purpose of this thesis is to identify "best practice" recommendations for successful implementation of the EPSDT outreach program at Memorial Health System's Hospital for Children in Colorado Springs through a policy analysis of Medicaid EPSDT services in Colorado. A successful program at Memorial will increase education and awareness of EPSDT services, enrollment, and access to and utilization of health care services for eligible children. Methodology utilized in this study included questionnaires designed for the EPSDT contract administrator and outreach coordinators/workers; analysis of current federal and state policies; and studies conducted at the federal and state level and by various advocacy groups. The need for this analysis of EPSDT came about in part through an awareness of the increasingly high number of children who live in poverty and are uninsured. Though the percentage of children living in poverty in Colorado is slightly below the national average (see Table 2), according to data analyzed by The Annie E. Casey Foundation, the percentage of children (0-18) living in poverty in Colorado increased from 10% in 2000 to 16% in 2006, a dramatic increase of 60% surpassed by only one other state in the nation (The Annie E. Casey Foundation, 2008). By comparison, the U.S. percentage of children in poverty during the same time frame rose from 17% to 18% (The Annie E. Casey Foundation, 2008). What kinds of health care services are available to this vulnerable and growing group of Coloradans, and what are the barriers that affect their enrollment in, access to, and utilization of these health care services? Barriers identified included difficulty with the application process; system and process issues; a lack of providers; and a lack of awareness and knowledge of EPSDT. Fiscal constraints and legislation at the federal and state level are also barriers to increasing enrollment and access to services. Outreach services are a critical component of providing EPSDT services, and several recommendations regarding outreach and case management will benefit the program in the future. Through this analysis and identification of a broad range of barriers, a clearer picture emerged of current challenges within the EPSDT program, as well as a broad range of strategies and recommendations to address these challenges. Through increased education and advocacy for EPSDT and the services it encompasses; stronger collaboration and cooperation between all groups involved, including providing a Medical Home for all eligible children; and new legislation putting more money and focus on comprehensive health care for low-income uninsured children, enrollment in, access to, and utilization of developmentally appropriate, quality health care services can be achieved.
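
For clarity, the 60% figure quoted above is a relative increase in the child poverty rate rather than a percentage-point change; the arithmetic, using the numbers cited, is:

\[
\frac{16\% - 10\%}{10\%} = 0.60 = 60\% \quad \text{(an absolute rise of 6 percentage points)}
\]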

Relevance: 100.00%

Publisher:

Abstract:

This paper investigates how exchange rates affect the utilization of a free trade agreement (FTA) scheme in trading. Changes in exchange rates affect FTA utilization in two ways: first, by changing the excess profits gained by utilizing the FTA scheme, and second, by promoting compliance with rules of origin. Our theoretical models predict that the depreciation of the exporter's currency against the importer's enhances the likelihood of FTA utilization through both channels. Furthermore, our empirical analysis, which is based on rich tariff-line-level data on the utilization of FTA schemes in Korea's imports from ASEAN countries, supports the theoretical prediction. We also show that the effects are smaller for more differentiated products.
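
As a stylized illustration (our notation, not necessarily the paper's model), the first channel can be seen by writing the exporter's gain from using the FTA in its own currency:

\[
\pi^{\mathrm{FTA}} \;=\; \frac{(\tau^{\mathrm{MFN}} - \tau^{\mathrm{FTA}})\, p\, q}{e} \;-\; C \;>\; 0,
\]

where \(\tau^{\mathrm{MFN}} - \tau^{\mathrm{FTA}}\) is the tariff preference margin, \(p q\) the export value in the importer's currency, \(C\) the fixed cost of complying with rules of origin in the exporter's currency, and \(e\) the importer-currency price of the exporter's currency. A depreciation of the exporter's currency (a fall in \(e\)) inflates the tariff saving measured in the exporter's currency relative to the fixed compliance cost, making utilization more likely.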

Relevance: 100.00%

Publisher:

Abstract:

This paper analyses, with computational fluid dynamics tools, the effect of using internal finned tubes in the design of parabolic trough collectors. Our numerical approach has been qualified against reported experimental data on the phenomena involved in finned-tube applications and on the solar irradiation of parabolic trough collectors. The application of finned tubes to the design of parabolic trough collectors must take into account features such as pressure losses, thermal losses, thermo-mechanical stress and thermal fatigue. Our analysis shows a potential improvement in the efficiency of parabolic trough solar plants through the application of internal finned tubes.
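
One of the features named above, the pressure-loss penalty of internal fins, can be sketched with the Darcy-Weisbach relation. This is not the paper's CFD model: the fluid properties, tube geometry, friction factor and the fin friction multiplier are all assumed placeholder values.

```python
# Illustrative sketch: pumping-penalty comparison between a smooth and an internally
# finned absorber tube using the Darcy-Weisbach pressure-drop relation (assumed values).
def pressure_drop_pa(friction_factor, length_m, hydraulic_diameter_m, density, velocity):
    """Darcy-Weisbach pressure drop in Pa."""
    return friction_factor * (length_m / hydraulic_diameter_m) * density * velocity ** 2 / 2.0

rho, v, length = 800.0, 2.5, 100.0        # thermal oil density [kg/m3], velocity [m/s], loop length [m]
d_h, f_smooth = 0.066, 0.02               # hydraulic diameter [m], smooth-tube friction factor
fin_friction_multiplier = 1.6             # placeholder: fins increase the friction factor

dp_smooth = pressure_drop_pa(f_smooth, length, d_h, rho, v)
dp_finned = pressure_drop_pa(f_smooth * fin_friction_multiplier, length, d_h, rho, v)
print(f"smooth: {dp_smooth / 1e3:.1f} kPa, finned: {dp_finned / 1e3:.1f} kPa")
```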

Relevance: 100.00%

Publisher:

Abstract:

This paper is a preliminary version of Chapter 3 of a State-of-the-Art Report by the IASS Working Group 5: Concrete Shell Roofs. The intention of this chapter is to provide those who intend to design concrete shell roofs with information and advice about the selection, verification and utilization of commercial computer tools for analysis and design tasks. The computer analysis and design steps for a concrete shell roof are described. Advice follows on the aspects to be considered in the application of commercial finite element (FE) computer programs to concrete shell analysis, starting with recommendations on how novices can gain confidence and competence in the use of software. To establish vocabulary and provide background references, brief surveys are presented of, first, element types and formulations for shells and, second, challenges presented by advanced analyses of shells. The final section of the chapter indicates what capabilities to seek when selecting commercial FE software for the analysis and design of concrete shell roofs. Brief concluding remarks summarize advice regarding judicious use of computer analysis in design practice.

Relevance: 100.00%

Publisher:

Abstract:

The effect of the addition of a commercial glutathione-enriched inactive dry yeast oenological preparation on the volatile and sensory properties of industrially manufactured rosé Grenache wines was evaluated during their shelf life. Triangle tests were performed at different times during wine aging (between 1 and 9 months) to determine the sensory differences between wines with and without glutathione inactive dry yeast preparations. Descriptive sensory analysis with a trained panel was carried out when sensory differences in the triangle test were noticed. In addition, consumer tests were performed in order to investigate consumers’ acceptability of the wines. Results revealed significant sensory differences between control and glutathione inactive dry yeast wines after 9 months of aging. At that time, glutathione inactive dry yeast wines were more intense in fruity aromas (strawberry, banana) and less intense in yeast notes than the control wine. The impact of the glutathione inactive dry yeast on the aroma might be the consequence of different effects that these preparations could induce in wine composition: modification of yeast byproducts during fermentation, release of volatile compounds from the inactive dry yeast, interaction of wine volatile compounds with yeast macromolecules from the inactive dry yeast, and a possible antioxidant effect of the glutathione released by the inactive dry yeast preparation on some specific volatile compounds.
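
Since the study relies on triangle tests, a short sketch of how such a test is typically judged for significance may be useful. The panel size and number of correct identifications below are hypothetical; the only fixed element is the 1/3 chance probability of picking the odd sample by guessing.

```python
# Hedged sketch: significance of a sensory triangle test with hypothetical counts.
from scipy.stats import binomtest

n_panelists = 30                     # hypothetical panel size
n_correct = 16                       # hypothetical correct identifications of the odd wine

result = binomtest(n_correct, n_panelists, p=1/3, alternative="greater")
print(f"p-value = {result.pvalue:.4f}")   # p < 0.05 suggests a perceivable difference
```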

Relevance: 100.00%

Publisher:

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of the hardware platform with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them.

The techniques based on extensions of intervals have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values.

A known drawback of the techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms for each group independently and then combines the results at the end. In this way, the number of noise sources in the system at any given time is controlled and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible.

This thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method builds on the fact that, although we strictly need to guarantee a given confidence level in the simulations for the final results of the optimization process, more relaxed levels, which imply a considerably smaller number of samples per simulation, can be used in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems.

Finally, this thesis introduces HOPLITE, an automated, flexible and modular framework for quantization that includes the implementation of the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new methodologies for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
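
As a concrete, heavily simplified illustration of the kind of Monte-Carlo-driven greedy word-length search discussed above (the baseline that the interpolative and incremental methods accelerate), the sketch below trims fractional bits from a toy datapath while an error budget holds. It is not HOPLITE and not the thesis' algorithm: the toy system, the error metric, the error budget and the cost model are all assumptions.

```python
# Minimal sketch: greedy word-length search with a Monte-Carlo error estimate.
# The toy "system" is a weighted sum of quantized signals; cost is total bits.
import random

WEIGHTS = [0.75, -1.25, 0.5]          # toy datapath: y = sum(w_i * x_i)
ERROR_BUDGET = 1e-3                   # maximum tolerated mean absolute output error
N_SAMPLES = 2000                      # Monte-Carlo samples per evaluation

def quantize(x, frac_bits):
    step = 2.0 ** -frac_bits
    return round(x / step) * step

def mc_error(frac_bits, rng):
    """Mean absolute error of the quantized system vs. the double-precision one."""
    total = 0.0
    for _ in range(N_SAMPLES):
        xs = [rng.uniform(-1, 1) for _ in WEIGHTS]
        exact = sum(w * x for w, x in zip(WEIGHTS, xs))
        quant = sum(w * quantize(x, b) for w, x, b in zip(WEIGHTS, xs, frac_bits))
        total += abs(exact - quant)
    return total / N_SAMPLES

rng = random.Random(0)
bits = [16, 16, 16]                   # start from a generous uniform assignment
improved = True
while improved:
    improved = False
    for i in range(len(bits)):        # greedily try to shave one bit per signal
        bits[i] -= 1
        if mc_error(bits, rng) <= ERROR_BUDGET:
            improved = True
        else:
            bits[i] += 1              # revert if the error budget is violated

print("word-lengths (fractional bits):", bits, "total bits:", sum(bits))
```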

Relevance: 100.00%

Publisher:

Abstract:

Understanding, and controlling, the conditions under which calcite precipitates within geothermal energy production systems is a key step in maintaining production efficiency. In this study, I apply methods of bulk and clumped isotope thermometry to an operating geothermal energy facility in northern Nevada to see how those methods can better inform the facility owner, AltaRock Energy, Inc., about the occurrence of calcite scale in their power plant. I have taken water samples from five production wells, the combined generator effluent, shallow cold-water wells, monitoring wells, and surface water. I also collected calcite scale samples from within the production system. Water samples were analyzed for stable oxygen isotope composition (δ18O). Calcite samples were analyzed for stable oxygen (δ18O) and carbon (δ13C) isotope composition, and for clumped isotope composition (Δ47). With two exceptions, the water compositions are very similar, likely indicating a common origin and a well-mixed hydrothermal system. The calcite samples are likewise similar to one another. Apparent temperatures calculated from δ18O values of water and calcite are lower than those recorded for the system. Apparent temperatures calculated from Δ47 are several degrees higher than the recorded well temperatures. The lower temperatures from the bulk isotope data are consistent with temperatures that could be expected during a de-pressurization of the production system, which would cause boiling in the pipes, a reduction in system temperature, and rapid precipitation of calcite scale. However, the high apparent temperature indicated by the Δ47 data suggests that the calcite is depleted in clumped isotopes given the known temperature of the system, which is inconsistent with this hypothesis. This depletion could instead result from disequilibrium isotopic fractionation during the aforementioned boiling events, which would make both the apparent δ18O-based and Δ47-based temperatures unrepresentative of the actual water temperature. This research can help improve our understanding of how isotopic analyses can better inform us about the movement of water through geothermal systems of the past and how it now moves through modern systems. Increased understanding of water movement in these systems could potentially allow for more efficient utilization of geothermal energy as a renewable resource.
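
A small sketch may clarify how an apparent temperature is obtained from a measured Δ47 value. The calibration form Δ47 = A·10⁶/T² + B is generic, and the coefficients below are placeholders, not the calibration used in this study.

```python
# Hedged sketch: apparent temperature from a clumped-isotope value, using a generic
# calibration Delta47 = A * 1e6 / T^2 + B with placeholder coefficients.
import math

A, B = 0.04, 0.23                      # placeholder calibration coefficients (assumed)

def apparent_temperature_c(delta47_permil):
    """Apparent carbonate formation temperature in deg C for a given Delta47 (permil)."""
    t_kelvin = math.sqrt(A * 1e6 / (delta47_permil - B))
    return t_kelvin - 273.15

print(f"{apparent_temperature_c(0.65):.1f} degC")   # hypothetical Delta47 measurement
```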

Relevance: 100.00%

Publisher:

Abstract:

In the past decade, the utilization of ambulance data to inform the prevalence of nonfatal heroin overdose has increased. These data can assist public health policymakers, law enforcement agencies, and health providers in planning and allocating resources. This study examined the 672 ambulance attendances at nonfatal heroin overdoses in Queensland, Australia, in 2000. Gender distribution showed a typical 70/30 male-to-female ratio. Equal numbers of persons with nonfatal heroin overdose were aged 15 to 24 years and 25 to 34 years. Police were present in only 1 of 6 cases, and 28.1% of patients reported using drugs alone. Ambulance data are proving to be a valuable population-based resource for describing the incidence and characteristics of nonfatal heroin overdose episodes. Future studies could focus on the differences between nonfatal heroin overdose and fatal heroin overdose samples.

Relevance: 100.00%

Publisher:

Abstract:

Projects that are exposed to uncertain environments can be effectively controlled with the application of risk analysis during the planning stage. The Analytic Hierarchy Process, a multiattribute decision-making technique, can be used to analyse and assess project risks that are objective or subjective in nature. Among other advantages, the process logically integrates the various elements of the planning process. The results from risk analysis and activity analysis are then used to develop a logical contingency allowance for the project through the application of probability theory. The contingency allowance is created in two parts: (a) a technical contingency, and (b) a management contingency. This provides a basis for decision making in a changing project environment. Effective control of the project is made possible by limiting changes to within the monetary contingency allowance for the work package concerned and by utilizing the contingency through proper appropriation. The whole methodology is applied to a pipeline-laying project in India, and its effectiveness in project control is demonstrated.
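
Because the methodology rests on the Analytic Hierarchy Process, a compact sketch of how AHP priority weights are derived from a pairwise comparison matrix may be helpful. The risk factors and the judgments in the matrix are hypothetical; only the eigenvector procedure and Saaty's consistency check are standard AHP.

```python
# Illustrative AHP sketch: priority weights from a pairwise comparison matrix via the
# principal eigenvector, plus a consistency ratio (hypothetical judgments).
import numpy as np

# Rows/columns: [technical risk, schedule risk, financial risk], Saaty 1-9 scale.
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   2.0],
              [1/5.0, 1/2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # normalized priority weights

n = A.shape[0]
consistency_index = (eigvals[k].real - n) / (n - 1)
random_index = 0.58                            # Saaty's random index for n = 3
print("weights:", np.round(weights, 3),
      "CR:", round(consistency_index / random_index, 3))
```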

Relevance: 100.00%

Publisher:

Abstract:

The objective of this research is to determine the influences of social, environmental, behavioral, and economic forces on the health care service utilization of four racial/ethnic groups of non-institutionalized elders in a multicultural urban environment. To address these issues, this dissertation examines three intertwined themes of culture, aging, and health, using a sample of elders residing in Miami-Dade County, FL, in four racial/ethnic groups: white non-Hispanic; black non-Hispanic English speakers; Cuban; and non-Cuban Hispanic. The research questions were analyzed using both quantitative and qualitative data. The quantitative component uses telephone survey data from the Dade County Needs Assessment; its purpose is to develop a more comprehensive model of elder health care utilization behavior. The qualitative component uses data from focus groups of the Dade County Needs Assessment, archival data, and a literature review of previous ethnographic research; its purpose is to gain a better understanding of the social construction of the terms “age” and “aging,” as well as to place issues of health and health care in the lives of elders. The findings raised several important issues. First, just because people share a common chronological age does not mean that they are the same in every other respect. Examining elders as a homogeneous group of users of formal health care services in a community is simplistic. Placing “aging” and “health” in a cultural context is important. My findings confirm that the meanings of “aging” and “old” are socially constructed. Further, the term “aging” is not synonymous with ill health or frailty. This was a consistent finding in both the quantitative and qualitative results. While all aging individuals share a mutual orientation toward aging (i.e., the biological process), they do not age the same way (i.e., the social construction of “aging”). Thus, policymakers and others serving the elder population must be aware of the particular cultural context, as well as the previous life experiences, of the individuals that they serve. This analysis documents the importance of culture and geographic community in understanding the health care service utilization of elders.