954 results for Current household survey
Abstract:
Objective. Weight gain after cancer treatment is associated with breast cancer recurrence. To prolong cancer-free survivorship, interventions to manage post-diagnosis weight are sometimes conducted. However, little is known about which factors are associated with weight management behaviors among cancer survivors. In this study, we examined associations of demographic, clinical, and psychosocial variables with weight management behaviors in female breast cancer survivors. We also examined whether knowledge about post-diagnosis weight gain and its risk is associated with weight management behaviors. Methods. 251 female breast cancer survivors completed an internet survey. They reported current performance of three weight management behaviors (general weight management, physical activity, and healthy diet). We also measured attitude, self-efficacy, knowledge, and social support regarding these behaviors, along with demographic and clinical characteristics. Results. Multiple regression models explained 17% of the variance in general weight management, 45% in physical activity, and 34% in healthy dieting. The models had 9–14 predictor variables, which differed across models. Social support and self-efficacy were associated with all three behaviors, and self-efficacy made the strongest contribution in every model. Knowledge about weight gain and its risks was not associated with any weight management behavior; however, women who obtained this knowledge during cancer treatment were more likely to engage in physical activity and healthy dieting. Conclusions. The findings suggest that an intervention designed to increase survivors' self-efficacy to manage weight, be physically active, and eat healthily will effectively promote engagement in these behaviors. Knowledge about risk may motivate women to manage post-diagnosis weight if the information is provided during cancer treatment.
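As an illustration of the kind of analysis summarized above (a sketch only, not the authors' actual model; the variable names and synthetic data are invented), a multiple regression of a behavior score on psychosocial predictors can be fit as follows:

```python
# Minimal sketch: regress a weight-management behavior score on
# psychosocial predictors to obtain a variance-explained (R^2) figure
# like those reported above. Data and effect sizes are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 251  # sample size reported in the abstract
self_efficacy = rng.normal(size=n)
social_support = rng.normal(size=n)
# Hypothetical outcome, dominated by self-efficacy to echo its
# reported strongest contribution.
physical_activity = 0.6 * self_efficacy + 0.3 * social_support + rng.normal(size=n)

X = sm.add_constant(np.column_stack([self_efficacy, social_support]))
fit = sm.OLS(physical_activity, X).fit()
print(fit.rsquared)  # proportion of variance explained
print(fit.params)    # intercept and predictor coefficients
```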
Abstract:
This investigation compares two different methodologies for calculating the national cost of epilepsy: the provider-based survey method (PBSM) and the patient-based medical charts and billing method (PBMC&BM). The PBSM uses the National Hospital Discharge Survey (NHDS), the National Hospital Ambulatory Medical Care Survey (NHAMCS), and the National Ambulatory Medical Care Survey (NAMCS) as the sources of utilization. The PBMC&BM uses patient data, charts and billings, to determine utilization rates for specific components of hospital, physician, and drug prescription services. The 1995 hospital and physician cost of epilepsy is estimated to be $722 million using the PBSM and $1,058 million using the PBMC&BM. The difference of $336 million results from a $136 million difference in utilization and a $200 million difference in unit cost. Utilization. The utilization difference of $136 million is composed of an inpatient variation of $129 million ($100 million hospital and $29 million physician) and an ambulatory variation of $7 million. The $100 million hospital variance is attributed to the inclusion of febrile seizures in the PBSM (−$79 million) and the exclusion of admissions attributable to epilepsy ($179 million). The former suggests that the diagnostic codes used in the NHDS may not properly match the current definition of epilepsy as used in the PBMC&BM; the latter suggests NHDS errors in the attribution of an admission to the principal diagnosis. The $29 million variance in inpatient physician utilization results from different per-day-of-care physician visit rates: 1.3 for the PBMC&BM versus 1.0 for the PBSM. The absence of visit frequency measures in the NHDS affects the internal validity of the PBSM estimate and requires the investigator to make conservative assumptions. The remaining ambulatory resource utilization variance is $7 million; of this amount, $22 million is the result of an underestimate of ancillaries in the NHAMCS and NAMCS extrapolations using the patient visit weight. Unit cost. The resource cost variation is $200 million: $22 million inpatient and $178 million ambulatory. The inpatient variation of $22 million is composed of $19 million in hospital per-day rates, due to a higher cost per day in the PBMC&BM, and $3 million in physician visit rates, due to a higher cost per visit in the PBMC&BM. The ambulatory cost variance is $178 million, composed of higher per-physician-visit costs of $97 million and higher per-ancillary costs of $81 million. Both are attributed to the PBMC&BM's precise identification of resource utilization, which permits accurate valuation. Conclusion. Both methods have specific limitations. The PBSM's strengths are its sample designs, which lead to nationally representative estimates and permit statistical point and confidence interval estimation for the nation for certain variables under investigation. However, the findings of this investigation suggest that the internal validity of the derived estimates is questionable and that important additional information required to precisely estimate the cost of an illness is absent. The PBMC&BM is a superior method for identifying the resources utilized in the physician encounter with the patient, permitting more accurate valuation. However, the PBMC&BM does not have the statistical reliability of the PBSM; it relies on synthesized national prevalence estimates to extrapolate a national cost estimate. While precision is important, the ability to generalize to the nation may be limited due to the small number of patients followed.
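The decomposition reported above can be checked with a few lines of arithmetic; the figures below are taken directly from the text (in millions of 1995 dollars):

```python
# Quick arithmetic check of the cost-variance decomposition above.
utilization = {
    "hospital_inpatient": -79 + 179,   # febrile seizures / excluded admissions
    "physician_inpatient": 29,
    "ambulatory": 7,
}
unit_cost = {
    "hospital_per_day": 19,
    "physician_inpatient_visit": 3,
    "ambulatory_physician_visit": 97,
    "ambulatory_ancillary": 81,
}
util_total = sum(utilization.values())   # 136
cost_total = sum(unit_cost.values())     # 200
assert util_total == 136 and cost_total == 200
print(util_total + cost_total)           # 336 = $1,058M - $722M
```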
Abstract:
This cross-sectional analysis of data from the Third National Health and Nutrition Examination Survey was conducted to determine the prevalence and determinants of asthma and wheezing among US adults, and to identify the occupations and industries at high risk of developing work-related asthma and work-related wheezing. Separate logistic models were developed for physician-diagnosed asthma (MD asthma), wheezing in the previous 12 months (wheezing), work-related asthma, and work-related wheezing. Major risk factors, including demographic, socioeconomic, indoor air quality, allergy, and other characteristics, were analyzed. The prevalence of lifetime MD asthma was 7.7% and the prevalence of wheezing was 17.2%. Mexican-Americans exhibited the lowest prevalence of MD asthma (4.8%; 95% confidence interval (CI): 4.2, 5.4) compared to other race-ethnic groups. The prevalence of MD asthma or wheezing did not vary by gender. Multiple logistic regression analysis showed that Mexican-Americans were less likely to develop MD asthma (adjusted odds ratio (ORa) = 0.64, 95% CI: 0.45, 0.90) and wheezing (ORa = 0.55, 95% CI: 0.44, 0.69) compared to non-Hispanic whites. Low education level, current and past smoking, pet ownership, lifetime physician-diagnosed hay fever, and obesity were all significantly associated with MD asthma and wheezing. No significant effect of indoor air pollutants on asthma and wheezing was observed in this study. The prevalence of work-related asthma was 3.70% (95% CI: 2.88, 4.52) and the prevalence of work-related wheezing was 11.46% (95% CI: 9.87, 13.05). The major occupations identified as at risk of developing work-related asthma and wheezing were cleaners; farm and agriculture-related occupations; entertainment-related occupations; protective service occupations; construction; mechanics and repairers; textile; fabricators and assemblers; other transportation and material-moving occupations; freight, stock, and material movers; motor vehicle operators; and equipment cleaners. The population attributable risks for work-related asthma and wheezing among occupations were 26% and 27%, respectively. The major industries identified as at risk of work-related asthma and wheezing include the entertainment-related industry; agriculture, forestry, and fishing; construction; electrical machinery; repair services; and lodging places. Among industries, the population attributable risk was 36.5% for work-related asthma and 28.5% for work-related wheezing. Asthma remains an important public health issue in the US and other regions of the world.
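The abstract does not state how the population attributable risks were computed; one common approach is Levin's formula, sketched below with invented prevalence and relative-risk values purely for illustration:

```python
# Illustrative only: Levin's formula for population attributable risk
# (PAR). The exposure prevalence and relative risk here are made up.
def levin_par(p_exposed: float, relative_risk: float) -> float:
    """PAR = Pe*(RR - 1) / (1 + Pe*(RR - 1))."""
    excess = p_exposed * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# e.g. 30% of workers in at-risk occupations with RR = 2.2
print(f"{levin_par(0.30, 2.2):.0%}")  # ~26%, the order of the figures above
```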
Abstract:
An extensive set of conductivity-temperature-depth (CTD)/lowered acoustic Doppler current profiler (LADCP) data obtained within the northwestern Weddell Sea in August 1997 characterizes the dense water outflow from the Weddell Sea and overflow into the Scotia Sea. Along the outer rim of the Weddell Gyre, there is a stream of relatively low salinity, high oxygen Weddell Sea Deep Water (defined as water between 0° and −0.7°C), constituting a more ventilated form of this water mass than that found farther within the gyre. Its enhanced ventilation is due to injection of relatively low salinity shelf water found near the northern extreme of the Antarctic Peninsula's Weddell Sea shelf, shelf water too buoyant to descend to the deep-sea floor. The more ventilated form of Weddell Sea Deep Water flows northward along the eastern side of the South Orkney Plateau, passing into the Scotia Sea rather than continuing along an eastward path in the northern Weddell Sea. Weddell Sea Bottom Water also exhibits two forms: a low-salinity, better oxygenated component confined to the outer rim of the Weddell Gyre, and a more saline, less oxygenated component observed farther into the gyre. The more saline Weddell Sea Bottom Water is derived from the southwestern Weddell Sea, where high-salinity shelf water is abundant. The less saline Weddell Sea Bottom Water, like the more ventilated Weddell Sea Deep Water, is derived from lower-salinity shelf water at a point farther north along the Antarctic Peninsula. Transports of the Weddell Sea Deep and Bottom Water masses crossing 44°W, estimated from one LADCP survey, are 25 × 10⁶ and 5 × 10⁶ m³/s, respectively. The low-salinity, better ventilated forms of Weddell Sea Deep and Bottom Water flowing along the outer rim of the Weddell Gyre have the position and depth range that would lead to overflow of the topographic confines of the Weddell Basin, whereas the more saline forms may be forced to recirculate within the Weddell Gyre.
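For readers unfamiliar with such transport figures, the sketch below shows in principle how a volume transport is obtained from an LADCP section by integrating velocity over the section area; the velocities and bin areas are invented, not the survey's data:

```python
# Illustrative transport calculation: integrate the velocity normal to
# a section over the area each station-pair/depth bin represents.
import numpy as np

velocity = np.array([0.05, 0.12, 0.08, 0.03])  # m/s per bin (invented)
bin_area = np.array([4e7, 5e7, 5e7, 3e7])      # m^2 per bin (invented)

transport_m3s = float(np.sum(velocity * bin_area))  # m^3/s
print(f"{transport_m3s / 1e6:.1f} Sv")              # 1 Sv = 10^6 m^3/s
```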
Abstract:
We report the northernmost and deepest known occurrence of deep-water pycnodontine oysters, based on two surveys along the French Atlantic continental margin: to the La Chapelle continental slope (2006) and the Guilvinec Canyon (2008). The combined use of multibeam bathymetry, seismic profiling, CTD casts, and a remotely operated vehicle (ROV) made it possible to describe the physical habitat and to assess the oceanographic controls on the recently described species Neopycnodonte zibrowii. These oysters have been observed in vivo at depths from 540 to 846 m, colonizing overhanging banks or escarpments protruding from steep canyon flanks. Especially in the Bay of Biscay, such physical habitats may only be observed within canyons, where they are created by both long-term turbiditic and contouritic processes. Frequent observations of sand ripples on the seabed indicate the presence of a steady but enhanced bottom current of about 40 cm/s. The occurrence of the oysters also coincides with the interface between the Eastern North Atlantic Water and the Mediterranean Outflow Water. A combination of this water mass mixing, internal tide generation, and strong primary surface productivity may generate an enhanced nutrient flux, which is funnelled through the canyon. When these ideal environmental conditions are met, up to 100 individuals per m² may be observed. These deep-water oysters require a vertical habitat, which is often incompatible with the requirements of other sessile organisms, and they are only sparsely distributed along the continental margins. The discovery of these giant oyster banks illustrates the rich biodiversity of deep-sea canyons and shows that they are underestimated as true ecosystem hotspots.
Abstract:
BACKGROUND Double-checking is widely recommended as an essential method to prevent medication errors. However, prior research has shown that the concept of double-checking is not clearly defined, and that little is known about actual practice in oncology, for example, what kinds of checking procedures are applied. OBJECTIVE To study the practice of different double-checking procedures in chemotherapy administration and to explore nurses' experiences, for example, how often they actually find errors using a certain procedure. General evaluations regarding double-checking, for example, the frequency of interruptions during and caused by a check, or what is regarded as its essential feature, were also assessed. METHODS In a cross-sectional survey, qualified nurses working in the oncology departments of 3 hospitals were asked to rate 5 different scenarios of double-checking procedures on dimensions such as frequency of use in practice and appropriateness for preventing medication errors; they were also asked general questions about double-checking. RESULTS Overall, 274 nurses (70% response rate) participated in the survey. The procedure of jointly double-checking (read-read back) was most commonly used (69% of respondents) and was rated as very appropriate for preventing medication errors. Jointly checking medication was seen as the essential characteristic of double-checking more often than 'carrying out checks independently' (54% vs 24%). Most nurses (78%) found the frequency of double-checking in their department appropriate. Being interrupted in one's own current activity to support a double-check was reported to occur frequently. Regression analysis revealed a strong preference for the checks currently implemented at the respondents' workplace. CONCLUSIONS Double-checking is well regarded by oncology nurses as a procedure to help prevent errors, with joint checking being used most frequently. Our results show that the notion of independent checking needs to be transferred more actively into clinical practice. The high frequency of reported interruptions during and caused by double-checks is of concern.
Abstract:
This article introduces the current agent-oriented methodologies. It discusses the approaches that have been followed (mainly extending existing object-oriented and knowledge engineering methodologies), the suitability of these approaches for agent modelling, and some conclusions drawn from the survey.
Abstract:
The traditional power grid is just a one-way supplier that receives no feedback about the energy delivered, which tariffs would be most suitable for customers, the shifting daily electricity needs of a facility, etc. It is therefore only natural that efforts are being invested in improving power grid behavior and turning it into a Smart Grid. However, to this end, several components have to be either upgraded or created from scratch. Among the new components required, middleware appears as a critical one, for it will abstract the diversity of devices used for power transmission (smart meters, embedded systems, etc.) and will provide the application layer with a homogeneous interface to power production and consumption management data that could not be provided before. Additionally, middleware is expected to guarantee that updates to the current metering infrastructure (changes in service or hardware availability) or any added legacy measuring appliance will be acknowledged in any future request. Finally, semantic features are of major importance for tackling scalability and interoperability issues. A survey of the most prominent middleware architectures for Smart Grids is presented in this paper, along with an evaluation of their features and their strong points and weaknesses.
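As a minimal sketch of the abstraction role described above (not any specific surveyed architecture; the class and method names are hypothetical), a middleware layer can hide device diversity behind one homogeneous interface:

```python
# Sketch: adapters hide metering-device diversity and expose one
# homogeneous interface to the application layer above the middleware.
from abc import ABC, abstractmethod

class MeterAdapter(ABC):
    """Uniform interface the middleware offers for any metering device."""
    @abstractmethod
    def read_consumption_kwh(self) -> float: ...

class LegacyPulseMeter(MeterAdapter):
    """Legacy appliance that only reports raw pulse counts."""
    def __init__(self, pulses: int, pulses_per_kwh: int = 1000):
        self._pulses, self._ppk = pulses, pulses_per_kwh
    def read_consumption_kwh(self) -> float:
        return self._pulses / self._ppk  # convert pulses to kWh

class SmartMeter(MeterAdapter):
    """Modern device that already reports in kWh."""
    def __init__(self, reading_kwh: float):
        self._reading = reading_kwh
    def read_consumption_kwh(self) -> float:
        return self._reading

# The application layer iterates over heterogeneous devices uniformly.
devices: list[MeterAdapter] = [LegacyPulseMeter(123456), SmartMeter(98.7)]
print(sum(d.read_consumption_kwh() for d in devices))
```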
Abstract:
Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines.
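As a toy example of the preprocessing and read-out steps surveyed above (not taken from any cited pipeline; the threshold and synthetic frame are invented), a single image frame can be normalized, segmented, and reduced to an object count:

```python
# Illustrative image-analysis step: normalize a frame, threshold it,
# and count connected bright regions, e.g. as a crude object-count
# read-out of the kind used in high-content zebrafish screens.
import numpy as np
from scipy import ndimage

def count_bright_objects(frame: np.ndarray, rel_threshold: float = 0.5) -> int:
    """Count connected bright regions in a single 2-D image frame."""
    norm = (frame - frame.min()) / (np.ptp(frame) + 1e-12)  # scale to [0, 1]
    mask = norm > rel_threshold                              # binary segmentation
    _, n_objects = ndimage.label(mask)                       # connected components
    return n_objects

# Synthetic 64x64 frame with two bright blobs as a stand-in for real data.
frame = np.zeros((64, 64))
frame[10:18, 10:18] = 1.0
frame[40:50, 30:42] = 0.8
print(count_bright_objects(frame))  # -> 2
```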
Abstract:
Currently, there is a plethora of solutions regarding interconnectivity and interoperability for networked robots so that they can fulfill their purposes in a coordinated manner. In addition, middleware architectures are becoming increasingly popular due to the advantages they can guarantee (hardware abstraction, information homogenization, easy access for the applications above, etc.). However, there are still few contributions on the global state of the art in intermediation architectures for underwater robotics. As far as robotics is concerned, this is a major issue that must be tackled in order to get a holistic view of the existing proposals. This challenge is addressed in this paper by studying the most compelling works on this kind of software development in the current literature. The studied works have been assessed according to their most prominent features and capabilities. Furthermore, by studying and classifying the individual works, several common weaknesses have been revealed and are highlighted. This provides a starting ground for the development of a middleware architecture for underwater robotics capable of dealing with these issues.
Abstract:
Underwater acoustic sensor networks (UASNs) have become more and more important in ocean exploration applications, such as ocean monitoring, pollution detection, ocean resource management, underwater device maintenance, etc. In underwater acoustic sensor networks, since the routing protocol guarantees reliable and effective data transmission from the source node to the destination node, routing protocol design is an attractive topic for researchers, and many routing algorithms have been proposed in recent years. To present the current state of development of UASN routing protocols, we review herein the UASN routing protocol designs reported in recent years. In this paper, all the routing protocols are classified into different groups according to their characteristics and routing algorithms, such as non-cross-layer design routing protocols, traditional cross-layer design routing protocols, and intelligent algorithm-based routing protocols. This is also the first paper to introduce intelligent algorithm-based UASN routing protocols. In addition, we investigate the development trends of UASN routing protocols, which can provide researchers with clear and direct insights for further research.
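As a concrete taste of one well-known idea in this literature, the sketch below implements a DBR-style depth-based greedy forwarding rule, in which packets are relayed only toward shallower neighbors so that data climbs to surface sinks; the node values and threshold are illustrative, not from any specific protocol in the review:

```python
# Sketch of depth-based greedy forwarding (DBR-style): relay a packet
# only to a neighbor strictly shallower than the current node, so data
# moves toward surface sinks without needing full location information.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    depth_m: float  # depth below the surface, in meters

def choose_next_hop(current: Node, neighbors: list[Node],
                    min_depth_gain_m: float = 10.0) -> Node | None:
    """Pick the shallowest neighbor that improves depth by a threshold."""
    candidates = [n for n in neighbors
                  if current.depth_m - n.depth_m >= min_depth_gain_m]
    return min(candidates, key=lambda n: n.depth_m, default=None)

sender = Node(1, depth_m=800.0)
neighbors = [Node(2, 790.0), Node(3, 650.0), Node(4, 820.0)]
print(choose_next_hop(sender, neighbors))  # Node(node_id=3, depth_m=650.0)
```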
Abstract:
This Capstone presents a qualitative analysis of survey responses concerning river recreation management policies and techniques in the Western United States. Respondents were asked about management topics including permits and fees, monitoring, enforcement, resource management, recreational experience, and current and future demand for whitewater rafting. Responses were consistent for questions concerning permits for commercial uses, justification of fees, and enforcement, while responses varied for questions concerning permits for private uses, agency self-sufficiency, monitoring, and use capacity. Most respondents do not expect a significant increase in demand for commercial or private boating in the next five years, and those who do expect an increase do not see a need for additional commercial outfitters.
Abstract:
The current trend in the evolution of sensor systems seeks ways to provide more accuracy and resolution while decreasing size and power consumption. The use of Field Programmable Gate Arrays (FPGAs) provides specific reprogrammable hardware technology that can be properly exploited to obtain a reconfigurable sensor system. This adaptation capability enables the implementation of complex applications using partial reconfigurability at very low power consumption. For highly demanding tasks, FPGAs have been favored due to the high efficiency provided by their architectural flexibility (parallelism, on-chip memory, etc.), their reconfigurability, and their superb performance in the development of algorithms. FPGAs have improved the performance of sensor systems and have triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable, lower-power-consumption sensors based on FPGAs is being developed in Spain. In this paper, a review of these developments is presented, describing the FPGA technologies employed by the different research groups and providing an overview of future research within this field.
Abstract:
Background: The assessment of attitudes toward school with the objective of identifying adolescents who may be at risk of underachievement has become an important area of research in educational psychology, although few specific tools for their evaluation have been designed to date. One of the instruments available is the School Attitude Assessment Survey-Revised (SAAS-R). Method: The objective of the current research is to test the construct validity and to analyze the psychometric properties of the Spanish version of the SAAS-R. Data were collected from 1,398 students attending different high schools. Students completed the SAAS-R along with measures of the g factor, and academic achievement was obtained from school records. Results: Confirmatory factor analysis, multivariate analysis of variance and analysis of variance tests supported the validity evidence. Conclusions: The results indicate that the Spanish version of the SAAS-R is a useful measure that contributes to identification of underachieving students. Lastly, the results obtained and their implications for education are discussed.