25 results for Computer arithmetic and logic units.
Abstract:
PURPOSE: This longitudinal study aimed to compare heart rate variability (HRV) in elite athletes identified as being either in a 'fatigue' or a 'no-fatigue' state under 'real life' conditions. METHODS: 57 elite Nordic skiers were surveyed over 4 years. R-R intervals were recorded supine (SU) and standing (ST). The fatigue state was rated with a validated questionnaire. A multilevel linear regression model was used to analyze relationships between heart rate (HR) and HRV descriptors [total spectral power (TP), power in the low (LF) and high frequency (HF) ranges expressed in ms² and in normalized units (nu)] and the status without and with fatigue. Variables not distributed normally were transformed by taking their common logarithm (log10). RESULTS: 172 trials were identified as being in a 'fatigue' state and 891 in a 'no-fatigue' state. All supine HR and HRV parameters (Beta ± SE) differed significantly (P<0.0001) between 'fatigue' and 'no-fatigue': HRSU (+6.27 ± 0.61 bpm), logTPSU (-0.36 ± 0.04), logLFSU (-0.27 ± 0.04), logHFSU (-0.46 ± 0.05), logLF/HFSU (+0.19 ± 0.03), HFSU(nu) (-9.55 ± 1.33). Differences were also significant (P<0.0001) in standing: HRST (+8.83 ± 0.89), logTPST (-0.28 ± 0.03), logLFST (-0.29 ± 0.03), logHFST (-0.32 ± 0.04). In addition, the intra-individual variance of HRV parameters was larger (P<0.05) in the 'fatigue' state (logTPSU: 0.26 vs. 0.07, logLFSU: 0.28 vs. 0.11, logHFSU: 0.32 vs. 0.08, logTPST: 0.13 vs. 0.07, logLFST: 0.16 vs. 0.07, logHFST: 0.25 vs. 0.14). CONCLUSION: HRV was significantly lower in 'fatigue' than in 'no-fatigue', but was accompanied by a larger intra-individual variance of HRV parameters in 'fatigue'. This broader intra-individual variance might encompass different changes from the 'no-fatigue' state, possibly reflecting different fatigue-induced alterations of the HRV pattern.
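Where the abstract names the spectral HRV descriptors (TP, LF, HF, LF/HF, HF in normalized units) and the log10 transform, the following minimal Python sketch shows one conventional way such indices are derived from an R-R interval series. The 4 Hz resampling rate, the Welch periodogram, the band limits, and the HF/(LF+HF) definition of normalized units are common-practice assumptions, not details taken from the study.

```python
# Minimal sketch: spectral HRV descriptors from R-R intervals (assumed conventions).
import numpy as np
from scipy.signal import welch

def hrv_descriptors(rr_ms, fs=4.0):
    """Frequency-domain HRV indices from R-R intervals given in milliseconds."""
    t = np.cumsum(rr_ms) / 1000.0                  # beat times in seconds
    t_even = np.arange(t[0], t[-1], 1.0 / fs)      # evenly spaced time grid
    rr_even = np.interp(t_even, t, rr_ms)          # resampled tachogram
    f, psd = welch(rr_even - rr_even.mean(), fs=fs,
                   nperseg=min(256, len(rr_even)))

    def band_power(lo, hi):
        mask = (f >= lo) & (f < hi)
        return np.trapz(psd[mask], f[mask])        # power in ms^2

    lf = band_power(0.04, 0.15)
    hf = band_power(0.15, 0.40)
    tp = band_power(0.003, 0.40)
    return {
        "logTP": np.log10(tp),                     # log10 transform, as in the abstract
        "logLF": np.log10(lf),
        "logHF": np.log10(hf),
        "logLF/HF": np.log10(lf / hf),
        "HFnu": 100.0 * hf / (lf + hf),            # HF in normalized units
    }
```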
Abstract:
Advances in Near-surface Seismology and Ground-penetrating Radar (SEG Geophysical Developments Series No. 15) is a collection of original papers by renowned and respected authors from around the world. Technologies used in the application of near-surface seismology and ground-penetrating radar have seen significant advances in the last several years. Both methods have benefited from new processing tools, increased computer speeds, and an expanded variety of applications. This book, divided into four sections ("Reviews," "Methodology," "Integrative Approaches," and "Case Studies"), captures the most significant cutting-edge issues in active areas of research, unveiling truly pertinent studies that address fundamental applied problems. This collection of manuscripts grew from a core group of papers presented at a postconvention workshop, "Advances in Near-surface Seismology and Ground-penetrating Radar," held during the 2009 SEG Annual Meeting in Houston, Texas. This is the first cooperative publication effort between the near-surface communities of SEG, AGU, and EEGS. It will appeal to a large and diverse audience that includes researchers and practitioners inside and outside the near-surface geophysics community.
Abstract:
Since 2011, second-year medical students at Lausanne University have followed a one-day course in the community health care centers of the Canton of Vaud. They discover the medico-social network and attend patients' home visits. They experience the importance of information transmission and of the partnership between informal caregivers, professional caregivers, the general practitioner and hospital units. The goal of this course is to help future physicians collaborate with the teams of the community health care centers. This will be particularly important in the future, with an aging and more dependent population.
Abstract:
There has been relatively little change over recent decades in the methods used in research on self-reported delinquency. Face-to-face interviews and self-administered interviews in the classroom are still the predominant alternatives envisaged. New methods have been brought into the picture by recent computer technology, the Internet, and the increasing availability of computer equipment and Internet access in schools. In the autumn of 2004, a controlled experiment was conducted with 1,203 students in Lausanne (Switzerland), in which "paper-and-pencil" questionnaires were compared with computer-assisted interviews over the Internet. The experiment included a test of two different definitions of the (same) reference period. After the introductory question ("Did you ever..."), students were asked how many times they had done it (or experienced it), if ever, "over the last 12 months" or "since the October 2003 vacation". Few significant differences were found between the results obtained by the two methods or for the two definitions of the reference period in the answers concerning victimisation, self-reported delinquency, drug use, and failure to respond (missing data). Students were found to be more motivated to respond through the Internet, took less time to fill out the questionnaire, and were apparently more confident of privacy, while school principals were less reluctant to allow classes to be interviewed through the Internet. The Internet method also involves considerable cost reductions, which is a critical advantage if self-reported delinquency surveys are to become a routinely applied method of evaluation, particularly in countries with limited resources. On balance, the Internet may be instrumental in making research on self-reported delinquency far more feasible in situations where limited resources have so far prevented its implementation.
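As a hedged illustration of the kind of comparison reported (testing whether response distributions differ between the paper-and-pencil and Internet conditions), the sketch below runs a chi-square test of independence on an invented 2x2 table; the counts are placeholders, not data from the Lausanne experiment.

```python
# Hypothetical counts: rows are survey modes, columns are "reported at least
# one offence" vs. "reported none". The numbers are illustrative only.
from scipy.stats import chi2_contingency

table = [[180, 420],   # paper-and-pencil
         [175, 428]]   # Internet
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A p-value above 0.05 would be consistent with "few significant differences"
# between the two administration modes for this item.
```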
Abstract:
BACKGROUND: Empirical antibacterial therapy in hospitals is usually guided by local epidemiologic features reflected by institutional cumulative antibiograms. We investigated the additional information inferred by aggregating cumulative antibiograms by type of unit or according to the place of acquisition (i.e. community vs. hospital) of the bacteria. MATERIALS AND METHODS: Antimicrobial susceptibility rates of selected pathogens were collected over a 4-year period in a university-affiliated hospital. Hospital-wide antibiograms were compared with those selected by type of unit and sampling time (<48 or >48 h after hospital admission). RESULTS: Strains isolated >48 h after admission were less susceptible than those presumably arising from the community (<48 h). The comparison of units revealed significant differences among strains isolated >48 h after admission. When compared to hospital-wide antibiograms, susceptibility rates were lower in the ICU and surgical units for Escherichia coli to amoxicillin-clavulanate, enterococci to penicillin, and Pseudomonas aeruginosa to anti-pseudomonal beta-lactams, and in medical units for Staphylococcus aureus to oxacillin. In contrast, few differences were observed among strains isolated within 48 h of admission. CONCLUSIONS: Hospital-wide antibiograms reflect the susceptibility pattern for a specific unit with respect to community-acquired, but not hospital-acquired, strains. Antibiograms adjusted to these parameters may be useful in guiding the choice of empirical antibacterial therapy.
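To make the aggregation described above concrete, here is a minimal pandas sketch that computes a hospital-wide cumulative antibiogram and the same antibiogram stratified by unit and by presumed place of acquisition (<48 h vs. >48 h after admission). The column names, organisms and counts are illustrative assumptions, not the study's dataset.

```python
import pandas as pd

# One row per non-duplicate isolate (hypothetical example data).
isolates = pd.DataFrame({
    "organism":   ["E. coli", "E. coli", "S. aureus", "P. aeruginosa"],
    "antibiotic": ["amoxicillin-clavulanate", "amoxicillin-clavulanate",
                   "oxacillin", "piperacillin-tazobactam"],
    "susceptible": [1, 0, 1, 0],          # 1 = susceptible, 0 = resistant
    "unit": ["ICU", "surgical", "medical", "ICU"],
    "hours_from_admission": [12, 96, 72, 30],
})
isolates["acquisition"] = isolates["hours_from_admission"].apply(
    lambda h: "community (<48 h)" if h < 48 else "hospital (>48 h)")

# Hospital-wide cumulative antibiogram: percent susceptible per organism/antibiotic.
hospital_wide = (isolates.groupby(["organism", "antibiotic"])["susceptible"]
                 .mean().mul(100).round(1))

# The same rates stratified by unit and place of acquisition.
stratified = (isolates.groupby(["unit", "acquisition", "organism", "antibiotic"])
              ["susceptible"].mean().mul(100).round(1))

print(hospital_wide, stratified, sep="\n\n")
```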
Abstract:
This chapter tackles the tension between the tendency of scientific disciplines to "diversify" and the capacity of universities to give new scientific fields an institutional "home". The assumption is that new scientific fields must find support among scientists and among the cognitive units of universities in order to be included. As science is a strongly competitive social field, inclusion often meets resistance. It is argued that the opportunities for new scientific fields to be included depend on the kind of governance regime ruling universities. A comparison of the former bureaucratic-oligarchic governance model of most European universities with the current new public management governance model shows that the propensity of universities to include new scientific fields has increased, though there may be a price to pay, both in terms of which fields stand a chance of being integrated and in terms of the institutional possibilities for the invention of new ideas.
Abstract:
Introduction: Medication errors in hospitals may occur at any step of the medication process, including prescription, transcription, preparation and administration, and may originate with any of the actors involved. Neonatal intensive care units (NICU) take care of extremely frail patients in whom errors could have dramatic consequences. Our objective was to assess the frequency and nature of medication errors in the NICU of a university hospital in order to propose measures for improvement. Materials & Methods: The design was that of an observational prospective study over 4 consecutive months. All patients receiving ≥3 drugs were included. For each patient, observations made during the different stages were compiled in a computer form and compared with the literature. Setting: the 11-bed NICU of our university hospital. Main outcome measures: (a) frequency and nature of medication errors in prescription, transcription, preparation and administration; (b) drugs affected by errors. Results: 83 patients were included. 505 prescriptions and transcriptions, 447 preparations and 464 administrations were analyzed. 220 medication errors were observed: 102 (46.4%) at prescription, 25 (11.4%) at transcription, 19 (8.6%) at preparation and 73 (33.2%) at administration. Incomplete or ambiguous orders (24; 23.5%) were the most common errors observed at prescription, followed by wrong name (21; 20.6%), wrong dose (17; 16.7%) and omission (15; 14.7%). Wrong time (33; 45.2%) and wrong administration technique (31; 42.5%) were the most frequent medication errors during administration. According to the ATC classification, systemic antibacterials (53; 24.1%) were the most implicated, followed by perfusion solutions (40; 18.2%), respiratory system products (30; 13.6%), and mineral supplements and antithrombotic agents (20; 9.1%). Discussion, Conclusion: Proposed recommendations: better teaching of neonatal prescribing to medical interns; an improved prescription form to avoid omissions and ambiguities; development of a neonatal drug formulary, including prescription, preparation and administration modalities, to reduce errors at the different stages; and the presence of a clinical pharmacist in the NICU. Disclosure of Interest: none declared.
Abstract:
It has been convincingly argued that computer simulation modeling differs from traditional science. If we understand simulation modeling as a new way of doing science, the manner in which scientists learn about the world through models must also be considered differently. This article examines how researchers learn about environmental processes through computer simulation modeling. Suggesting a conceptual framework anchored in a performative philosophical approach, we examine two modeling projects undertaken by research teams in England, both aiming to inform flood risk management. One of the modeling teams operated in the research wing of a consultancy firm; the other was a group of university scientists taking part in an interdisciplinary project experimenting with public engagement. We found that in the first context the use of standardized software was critical to the process of improvisation: the obstacles that emerged concerned data, and they were resolved by exploiting affordances for generating, organizing, and combining scientific information in new ways. In the second context, an environmental competency group, the obstacles related to the computer program, and affordances emerged from combining experience-based knowledge with the scientists' skill, enabling a reconfiguration of the mathematical structure of the model and allowing the group to learn about local flooding.
Abstract:
OBJECTIVE: To quantify the relation between body mass index (BMI) and endometrial cancer risk, and to describe the shape of this relation. DESIGN: Pooled analysis of three hospital-based case-control studies. SETTING: Italy and Switzerland. POPULATION: A total of 1449 women with endometrial cancer and 3811 controls. METHODS: Multivariate odds ratios (OR) and 95% confidence intervals (95% CI) were obtained from logistic regression models. The shape of the relation was determined using a class of flexible regression models. MAIN OUTCOME MEASURE: The relation of BMI with endometrial cancer. RESULTS: Compared with women with a BMI of 18.5 to <25 kg/m², the odds ratio was 5.73 (95% CI 4.28-7.68) for women with a BMI ≥35 kg/m². The odds ratios were 1.10 (95% CI 1.09-1.12) and 1.63 (95% CI 1.52-1.75) for increments in BMI of 1 and 5 units, respectively. The relation was stronger in never-users of oral contraceptives (OR 3.35, 95% CI 2.78-4.03, for BMI ≥30 versus <25 kg/m²) than in users (OR 1.22, 95% CI 0.56-2.67), and in women with diabetes (OR 8.10, 95% CI 4.10-16.01, for BMI ≥30 versus <25 kg/m²) than in those without diabetes (OR 2.95, 95% CI 2.44-3.56). The relation was best fitted by a cubic model, although after the exclusion of the 5% upper and lower tails it was best fitted by a linear model. CONCLUSIONS: The results of this study confirm a role of elevated BMI in the aetiology of endometrial cancer and suggest that the risk in obese women increases in a cubic, nonlinear fashion. The relation was stronger in never-users of oral contraceptives and in women with diabetes. TWEETABLE ABSTRACT: Risk of endometrial cancer increases with elevated body weight in a cubic nonlinear fashion.
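For readers unfamiliar with how per-unit odds ratios and a cubic dose-response term are obtained from a logistic model, the sketch below fits such a model on simulated case-control data. The data, effect size and covariate set are placeholders and do not reproduce the pooled analysis.

```python
# Illustrative only: simulated data, not the Italian/Swiss case-control studies.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({"bmi": rng.normal(27, 5, 1000)})
# Simulated case status with an assumed log-odds slope of 0.1 per BMI unit.
df["case"] = rng.binomial(1, 1 / (1 + np.exp(-(0.1 * (df["bmi"] - 27) - 1.0))))

# Linear-in-BMI logistic model: ORs per 1 and per 5 BMI units from one coefficient.
X = sm.add_constant(df[["bmi"]])
fit = sm.Logit(df["case"], X).fit(disp=0)
beta = fit.params["bmi"]
ci = fit.conf_int().loc["bmi"]
print("OR per 1 BMI unit :", np.exp(beta), np.exp(ci.values))
print("OR per 5 BMI units:", np.exp(5 * beta), np.exp(5 * ci.values))

# Cubic model for the shape of the relation (BMI centered to limit collinearity).
b = df["bmi"] - df["bmi"].mean()
X3 = sm.add_constant(np.column_stack([b, b**2, b**3]))
fit3 = sm.Logit(df["case"], X3).fit(disp=0)
print(fit3.params)
```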
Abstract:
Scientific curiosity, the exploration of georesources and environmental concerns are pushing the geoscientific research community toward subsurface investigations of ever-increasing complexity. This review explores various approaches to formulating and solving inverse problems in ways that effectively integrate geological concepts with geophysical and hydrogeological data. Modern geostatistical simulation algorithms can produce multiple subsurface realizations that are in agreement with conceptual geological models, and statistical rock physics can be used to map these realizations into the physical properties that are sensed by the geophysical or hydrogeological data. The inverse problem consists of finding one such subsurface realization, or an ensemble of them, that is in agreement with the data. The most general inversion frameworks are presently often computationally intractable when applied to large-scale problems, and it is necessary to better understand the implications of simplifying (1) the conceptual geological model (e.g., using model compression); (2) the physical forward problem (e.g., using proxy models); and (3) the algorithm used to solve the inverse problem (e.g., Markov chain Monte Carlo or local optimization methods) in order to reach practical and robust solutions given today's computer resources and knowledge. We also highlight the need to use geophysical and hydrogeological data not only for parameter estimation, but also to falsify or corroborate alternative geological scenarios.
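As a toy illustration of the probabilistic inversion idea discussed in the review, the following sketch runs a random-walk Metropolis sampler over a deliberately simplified two-parameter model with an assumed linear forward operator. In practice the proposal step would draw from geostatistical realizations and the forward solver would be a geophysical or hydrogeological simulator.

```python
# Toy Metropolis inversion: simplified stand-in, not the review's methodology.
import numpy as np

rng = np.random.default_rng(42)

def forward(m):
    """Placeholder forward problem: a fixed linear map from model to data."""
    G = np.array([[1.0, 0.5], [0.2, 1.5], [0.7, 0.3]])
    return G @ m

m_true = np.array([2.0, -1.0])
sigma = 0.05
d_obs = forward(m_true) + rng.normal(0, sigma, 3)   # noisy observations

def log_likelihood(m):
    r = d_obs - forward(m)
    return -0.5 * np.sum((r / sigma) ** 2)

m = np.zeros(2)                                     # starting model
samples = []
for _ in range(5000):
    m_prop = m + rng.normal(0, 0.1, 2)              # random-walk proposal
    if np.log(rng.uniform()) < log_likelihood(m_prop) - log_likelihood(m):
        m = m_prop                                  # accept the proposed model
    samples.append(m)

posterior = np.array(samples[1000:])                # discard burn-in
print("posterior mean:", posterior.mean(axis=0))
print("posterior std :", posterior.std(axis=0))
```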