929 results for 230107 Differential, Difference and Integral Equations
Abstract:
The goal of this thesis is the application of opto-electronic numerical simulation to heterojunction silicon solar cells featuring an all-back-contact architecture (Interdigitated Back Contact Hetero-Junction, IBC-HJ). The studied structure exhibits both metal contacts, emitter and base, at the back surface of the cell, with the objective of reducing the optical losses caused by front-contact shadowing in conventional photovoltaic devices. Overall, IBC-HJ cells are promising low-cost alternatives to monocrystalline wafer-based solar cells featuring front- and back-contact schemes; in fact, in IBC-HJ cells the high-concentration doping diffusions are replaced by low-temperature deposition processes of thin amorphous silicon layers. Furthermore, another advantage of IBC solar cells over conventional architectures is the possibility of low-cost assembly of photovoltaic modules, since all contacts lie on the same side. A preliminary extensive literature survey helped to highlight the specific critical aspects of IBC-HJ solar cells as well as the state of the art of their modeling, processing, and the performance of practical devices. To perform the analysis of IBC-HJ devices, a two-dimensional (2-D) numerical simulation flow was set up. A commercial device simulator based on the finite-difference method, which numerically solves the whole set of equations governing electrical transport in semiconductor materials (Sentaurus Device by Synopsys), was adopted. The first activity carried out during this work was the definition of a 2-D geometry corresponding to the simulation domain and the specification of the electrical and optical properties of the materials. In order to calculate the main figures of merit of the investigated solar cells, the spatially resolved photon absorption rate map was computed by means of an optical simulator. Optical simulations were performed using two different methods depending on the geometrical features of the front interface of the solar cell: the transfer matrix method (TMM) and ray tracing (RT). The first method models light propagation by plane waves within one-dimensional spatial domains, under the assumption that the device consists of a stack of parallel layers with planar interfaces. In addition, TMM is suitable for simulating thin multilayer anti-reflection coatings that reduce the amount of light reflected at the front interface. Ray tracing is required for three-dimensional optical simulations of upright-pyramid textured surfaces, which are widely adopted to significantly reduce reflection at the front surface. The optical generation profiles are interpolated onto the electrical grid adopted by the device simulator, which solves the carrier continuity equations coupled with the Poisson equation in a self-consistent way. The main figures of merit are calculated by post-processing the output data of the device simulation. After validating the simulation methodology by comparing simulation results with literature data, the ultimate efficiency of the IBC-HJ architecture was calculated. Accounting for all optical losses, IBC-HJ solar cells reach a theoretical maximum efficiency above 23.5% (without texturing at the front interface), higher than that of both standard homojunction crystalline silicon (Homogeneous Emitter, HE) and front-contact heterojunction (Heterojunction with Intrinsic Thin layer, HIT) solar cells.
However, the critical issues of this structure stem mainly from the defect density and the poor carrier mobility of the amorphous silicon layers. Lastly, the influence of the most critical geometrical and physical parameters on the main figures of merit has been investigated by applying the numerical simulation tool set up during the first part of the present thesis. Simulations have highlighted that carrier mobility and defect levels in amorphous silicon may lead to a potentially significant reduction of the conversion efficiency.
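As an illustration of the optical step named above, the following is a minimal sketch of the transfer matrix method for a planar layer stack at normal incidence; the refractive indices and the 75 nm coating thickness are illustrative assumptions chosen for the example, not parameters from the thesis.

```python
import numpy as np

def tmm_reflectance(n_list, d_list, wavelength):
    """Reflectance of a planar layer stack at normal incidence.

    n_list     : complex refractive indices [ambient, layers..., substrate]
    d_list     : thicknesses (nm) of the intermediate layers only
    wavelength : vacuum wavelength (nm)
    """
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_list[1:-1], d_list):
        delta = 2 * np.pi * n * d / wavelength          # phase thickness of layer
        layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ layer                                    # characteristic matrix product
    n0, ns = n_list[0], n_list[-1]
    num = n0 * (M[0, 0] + M[0, 1] * ns) - (M[1, 0] + M[1, 1] * ns)
    den = n0 * (M[0, 0] + M[0, 1] * ns) + (M[1, 0] + M[1, 1] * ns)
    return abs(num / den) ** 2                           # |r|^2

# Example: a hypothetical 75 nm coating (n = 2.0) on silicon (n ~ 3.9) at 600 nm;
# this is near the quarter-wave condition, so reflectance drops close to zero.
print(tmm_reflectance([1.0, 2.0, 3.9], [75.0], 600.0))
```

The quarter-wave example shows why a single thin layer already suppresses front-surface reflection, which is the role TMM plays in the simulation flow before the generation profile is handed to the electrical solver.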
Abstract:
Recent advances in the fast-growing area of therapeutic/diagnostic proteins and antibodies - novel and highly specific drugs - as well as progress in the field of functional proteomics regarding the correlation between the aggregation of damaged proteins and (immuno)senescence or aging-related pathologies, underline the need for adequate analytical methods for the detection, separation, characterization, and quantification of protein aggregates, regardless of their origin or formation mechanism. Hollow fiber flow field-flow fractionation (HF5), the miniaturized version of FlowFFF and an integral part of the Eclipse DUALTEC FFF separation system, was the focus of this research; this flow-based separation technique proved to be uniquely suited for the hydrodynamic size-based separation of proteins and protein aggregates over a very broad size and molecular weight (MW) range, often present at trace levels. HF5 was shown to be (a) highly selective in terms of protein diffusion coefficients, (b) versatile in terms of the choice of biocompatible carrier solution, (c) able to preserve the biophysical properties/molecular conformation of the proteins/protein aggregates, and (d) able to discriminate between different types of protein aggregates. Thanks to the advantages of miniaturization and the online coupling with highly sensitive detection techniques (UV/Vis, intrinsic fluorescence, and multi-angle light scattering), HF5 achieved very low detection/quantification limits for protein aggregates. Compared to size-exclusion chromatography (SEC), HF5 demonstrated superior selectivity and potential as an orthogonal analytical method in the extended characterization assays often required for therapeutic protein formulations. In addition, the developed HF5 methods proved to be rapid, highly selective, sensitive, and repeatable. HF5 was ideally suited as a first dimension of separation for aging-related protein aggregates from whole cell lysates (a proteome pre-fractionation method), and, through HF5-(UV)-MALS online coupling, important biophysical information on the fractionated proteins and protein aggregates was gathered: size (rms radius and hydrodynamic radius), absolute MW, and conformation.
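Because HF5 separates on the basis of translational diffusion coefficients, the hydrodynamic radius reported above follows from the Stokes-Einstein relation; below is a minimal sketch, assuming an aqueous carrier at 25 °C and an illustrative diffusion coefficient rather than a value from this work.

```python
from math import pi

def hydrodynamic_radius(D, T=298.15, eta=8.9e-4):
    """Stokes-Einstein: hydrodynamic radius (m) from a diffusion coefficient.

    D   : translational diffusion coefficient (m^2/s)
    T   : absolute temperature (K)
    eta : dynamic viscosity of the carrier (Pa*s); water at 25 C by default
    """
    k_B = 1.380649e-23  # Boltzmann constant (J/K)
    return k_B * T / (6 * pi * eta * D)

# Example (hypothetical): a monomeric protein with D ~ 6e-11 m^2/s -> Rh ~ 4 nm;
# aggregates diffuse more slowly, so they elute later and give larger Rh.
print(hydrodynamic_radius(6e-11) * 1e9, "nm")
```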
Abstract:
Monte Carlo (MC)-based dose calculations can compute dose distributions with an accuracy surpassing that of conventional algorithms used in radiotherapy, especially in regions of tissue inhomogeneity and surface discontinuities. The Swiss Monte Carlo Plan (SMCP) is a GUI-based framework for photon MC treatment planning (MCTP) interfaced to the Eclipse treatment planning system (TPS). As with any dose calculation algorithm, the MCTP needs to be commissioned and validated before the algorithm is used for clinical cases. The aim of this study is the investigation of a 6 MV beam for clinical situations within the framework of the SMCP. In this respect, all parts, i.e. open fields and all the clinically available beam modifiers, have to be configured so that the calculated dose distributions match the corresponding measurements. Dose distributions for the 6 MV beam were simulated in a water phantom using a phase space source above the beam modifiers. The VMC++ code was used for the radiation transport through the beam modifiers (jaws, wedges, block, and multileaf collimator (MLC)) as well as for the calculation of the dose distributions within the phantom. The voxel size of the dose distributions was 2 mm in all directions. The statistical uncertainty of the calculated dose distributions was below 0.4%. Simulated depth dose curves and dose profiles in terms of [Gy/MU] for static and dynamic fields were compared with the corresponding measurements using dose difference and γ analysis. For a dose difference criterion of ±1% of D(max) and a distance-to-agreement criterion of ±1 mm, the γ analysis showed excellent agreement between measurements and simulations for all static open and MLC fields. Tuning the density and thickness of all hard wedges led to agreement with the corresponding measurements within 1% or 1 mm. Similar results were achieved for the block. For the validation of the tuned hard wedges, very good agreement between calculated and measured dose distributions was achieved using a 1%/1 mm criterion for the γ analysis. The calculated dose distributions of the enhanced dynamic wedges (10°, 15°, 20°, 25°, 30°, 45° and 60°) met the 1%/1 mm criteria when compared with the measurements for all situations considered. For the IMRT fields, all compared measured dose values agreed with the calculated dose values within a 2% dose difference or within 1 mm distance. The SMCP has been successfully validated for static and dynamic 6 MV photon beams, thus resulting in accurate dose calculations suitable for applications in clinical cases.
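The γ analysis used throughout this validation combines the dose-difference and distance-to-agreement criteria into a single pass/fail index. The following is a minimal one-dimensional sketch with synthetic Gaussian profiles; the function and data are illustrative, not the SMCP implementation.

```python
import numpy as np

def gamma_index(x, dose_ref, dose_eval, dta=1.0, dd=0.01):
    """1-D gamma analysis with a global dose criterion.

    x         : positions (mm), shared by both profiles
    dose_ref  : measured (reference) dose profile
    dose_eval : calculated (evaluated) dose profile
    dta       : distance-to-agreement criterion (mm)
    dd        : dose-difference criterion, fraction of max reference dose
    Returns the gamma value at each reference point; gamma <= 1 means pass.
    """
    dd_abs = dd * dose_ref.max()
    gammas = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta) ** 2              # normalized distance term
        dose2 = ((dose_eval - di) / dd_abs) ** 2   # normalized dose term
        gammas[i] = np.sqrt((dist2 + dose2).min()) # minimize over search points
    return gammas

# Example: two nearly identical synthetic profiles pass a 1%/1 mm test
x = np.linspace(-50, 50, 501)
ref = np.exp(-x**2 / 400)
ev = np.exp(-(x - 0.3)**2 / 400) * 1.005
print((gamma_index(x, ref, ev) <= 1).mean())  # fraction of points passing
```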
Abstract:
Cardiovascular disease (CVD) due to atherosclerosis of the arterial vessel wall and to thrombosis is the foremost cause of premature mortality and of disability-adjusted life years (DALYs) in Europe, and is also increasingly common in developing countries [1]. In the European Union, the economic cost of CVD represents €192 billion annually [1] in direct and indirect healthcare costs. The main clinical entities are coronary artery disease (CAD), ischaemic stroke, and peripheral arterial disease (PAD). The causes of these CVDs are multifactorial. Some of these factors relate to lifestyles, such as tobacco smoking, lack of physical activity, and dietary habits, and are thus modifiable. Other risk factors are also modifiable, such as elevated blood pressure, type 2 diabetes, and dyslipidaemias, or non-modifiable, such as age and male gender. These guidelines deal with the management of dyslipidaemias as an essential and integral part of CVD prevention. Prevention and treatment of dyslipidaemias should always be considered within the broader framework of CVD prevention, which is addressed in guidelines of the Joint European Societies' Task Forces on CVD prevention in clinical practice [2-5]. The latest version of these guidelines was published in 2007 [5]; an update will become available in 2012. These Joint ESC/European Atherosclerosis Society (EAS) guidelines on the management of dyslipidaemias are complementary to the guidelines on CVD prevention in clinical practice and address not only physicians [e.g. general practitioners (GPs) and cardiologists] interested in CVD prevention, but also specialists from lipid clinics or metabolic units who are dealing with dyslipidaemias that are more difficult to classify and treat.
Abstract:
We report the case of a 28-year-old woman with extensive red-black skin lesions on the left thigh, which appeared without trauma. The lesions developed during long-term coumarin therapy given for a deep vein thrombosis and an antiphospholipid syndrome. After consideration of the differential diagnoses, and given the typical clinical picture, we made the diagnosis of coumarin necrosis. We review the clinical and therapeutic features of this rare complication.
Abstract:
For as far back as human history can be traced, mankind has questioned what it means to be human. One of the most common approaches throughout Western culture's intellectual tradition in attempting to answer this question has been to compare humans with or against other animals. I argue that it was not until Charles Darwin's publication of The Descent of Man, and Selection in Relation to Sex (1871) that Western culture was forced to seriously consider human identity in relation to the human/nonhuman primate line. Since no thinker prior to Charles Darwin had caused such an identity crisis in Western thought, this interdisciplinary analysis of the history of how the human/nonhuman primate line has been understood focuses on the reciprocal relationship of popular culture and scientific representations from 1871 to the Human Genome Consortium in 2000. Focusing on the concept coined as the "Darwin-Müller debate," representations of the human/nonhuman primate line are traced through themes of language, intelligence, and claims of variation throughout the popular texts: Descent of Man, The Jungle Books (1894), Tarzan of the Apes (1914), and Planet of the Apes (1963). Additional themes, such as the nature-versus-nurture debate and other comparative phenotypic attributes commonly used for comparison between man and apes, are also analyzed. These popular culture representations are compared with related or influential scientific research of the respective time period of each text to shed light on the reciprocal nature of Western intellectual tradition, popular notions of the human/nonhuman primate line, and the development of the field of primatology. Ultimately, this thesis shows that the Darwin-Müller debate is indeterminable, and such a lack of resolution makes man uncomfortable. Man's unsettled response and desire for self-knowledge further facilitate a continued search for answers to human identity. As the Human Genome Project has given rise to new debates, and primate research has become less anthropocentric over time, the mysteries of man's future have become more concerning than the questions of our past. The human/nonhuman primate line has been reduced to a 1% difference, and new debates have begun to overshadow the Darwin-Müller debate. In conclusion, I argue that human identity is best represented through the metaphor of evolution: both have an unknown beginning, both have an indeterminable future with no definite end, and, like a species under the influence of evolution, what it means to be human is a constant, indeterminable process of change.
Abstract:
Post-Soviet countries are in the process of transformation from a totalitarian order to a democratic one, a transformation which is impossible without a profound shift in people's way of thinking. The group set themselves the task of determining the essence of this shift. Using a multidisciplinary approach, they looked at concrete ways of overcoming the totalitarian mentality and forming the mentality necessary for an open democratic society. They studied contemporary conceptions of tolerance and critical thinking and looked for new foundations of criticism, especially in hermeneutics. They then sought to substantiate the complementary relation between tolerance and criticism in the democratic way of thinking and to prepare a syllabus for teaching on the subject in Ukrainian higher education. In a philosophical exploration of tolerance, they began with religious tolerance as its first and most important form. Political and social interests often lay at the foundations of religious intolerance, and this implicitly comprised the transition to religious tolerance when conditions changed. Early polytheism was more or less indifferent to dogmatic deviations, but monotheism is intolerant of heresies. The damage wrought by the religious wars of the Reformation transformed tolerance into a value. The wars did not create religious tolerance but forced its recognition as a positive phenomenon. With the weakening of religious institutions in the modern era, the purely political nature of many conflicts became evident, and this stimulated the extrapolation of tolerance into secular life. Each historical era has certain acts and operations which may be interpreted as tolerant, and these can be classified according to whether or not they are based on the conscious following of the principle of tolerance. This criterion requires the separation of the phenomenon of tolerance from its concept and from tolerance as a value. Only the conjunction of a concept of tolerance with a recognition of its value can transform it into a principle dictating a norm of conscious behaviour. The analysis of the contemporary conception of tolerance focused on the diversity of the concept and concluded that the notions used cannot be combined within the framework of a single more or less simple classification, as the distinctions between them are driven by the complexity of the reality considered and the variety of its manifestations. Notions considered in relation to tolerance included pluralism, respect, and the particular-universal. The rationale of tolerance was also investigated, and the group felt that any substantiation of the principle of tolerance must take into account human beings' desire for knowledge. Before respecting or being tolerant of another person different from myself, I should first know where the difference lies, so knowledge is a necessary condition of tolerance. The traditional division of truth into scientific (objective and unique) and religious, moral, or political (subjective and so multiple) intensifies the problem of the relationship between truth and tolerance. Science was long seen as a field of "natural" intolerance, whereas the validity of tolerance was accepted in other intellectual fields. As tolerance emerges where there is difference and opposition, it is essentially linked with rivalry, and there is a growing recognition today that unlimited rivalry is able neither to direct the process of development nor to act as a creative force.
Social and economic reality has led to rivalry being regulated by the state, and a natural requirement of this is to associate tolerance with a special "purified" form of rivalry: an acceptance of the activity of different subjects and a specification of the norms of their competition. Tolerance and rivalry should therefore be subordinate to a degree of discipline, and the group point out that discipline, including self-discipline, is a regulator of the balance between them. Two problematic aspects of tolerance were identified: why something traditionally supposed to have no positive content has become a human activity today, and whether tolerance has full-scale cultural significance. The resolution of these questions requires a revision of the phenomenon and conception of tolerance to clarify its immanent positive content. This involved an investigation of the contemporary concept of tolerance and of the epistemological foundations of a negative solution of tolerance in Greek thought. An original solution to the problem of the extrapolation of tolerance to scientific knowledge was proposed, based on the Duhem-Quine theses and the conception of background knowledge. In this way, tolerance as a principle of mutual relations between different scientific positions gains an essential epistemological rationale and thus an important argument for its own universal status. The group then went on to consider the ontological foundations for a positive solution of this problem, beginning with the work of Poincaré and Reichenbach. The next aspect considered was the conceptual foundations of critical thinking, looking at the ideas of Karl Popper and St. Augustine and at the problem of the demarcation line between reasonable criticism and apologetic reasoning. Dogmatic and critical thinking in a political context were also considered, before an investigation of the foundations of critical thinking. As logic is essential to critical thinking, the state of this discipline in Ukrainian and Russian higher education was assessed, together with the limits of formal-logical grounds for criticism, the role of informal logic as a basis for critical thinking today, dialectical logic as a foundation for critical thinking, and the universality of the contemporary demand for criticism. The search for new foundations of critical thinking covered deconstructivism and critical hermeneutics, including the problem of the author. The relationship between tolerance and criticism was traced from the ancient world, both Eastern and Greek, through the transitional community of the Renaissance to the industrial community (Locke and Mill), and to the evolution of this relationship today, when these are viewed not as moral virtues but as ordinary norms. Tolerance and criticism were discussed as complementary manifestations of human freedom. If the completeness of freedom were accepted, it would be impossible to avoid recognition of the natural and legal nature of these manifestations, and the group argue that critical tolerance is able to avoid dismissing such negative phenomena as the degradation of taste and manners, pornography, etc. On the basis of their work, the group drew up the syllabus of a course in "Logic with Elements of Critical Thinking" and of a special course on the "Problem of Tolerance".
Abstract:
BACKGROUND: Pulmonary inflammation after cardiac surgery with cardiopulmonary bypass (CPB) has been linked to respiratory dysfunction and ultrastructural injury. Whether pretreatment with methylprednisolone (MP) can preserve pulmonary surfactant and the blood-air barrier, thereby improving pulmonary function, was tested in a porcine CPB model. MATERIALS AND METHODS: After randomizing pigs to placebo (PLA; n = 5) or MP (30 mg/kg; MP; n = 5), animals were subjected to 3 h of CPB with 1 h of cardioplegic cardiac arrest. Hemodynamic data, plasma tumor necrosis factor-alpha (TNF-alpha, ELISA), and pulmonary function parameters were assessed before, 15 min after, and 8 h after CPB. Lung biopsies were analyzed for TNF-alpha (Western blot) and for blood-air barrier and surfactant morphology (electron microscopy, stereology). RESULTS: Systemic TNF-alpha increased and cardiac index decreased at 8 h after CPB in PLA (P < 0.05 versus pre-CPB), but not in MP (P < 0.05 versus PLA). In both groups, at 8 h after CPB, PaO(2) and PaO(2)/FiO(2) were decreased and the arterio-alveolar oxygen difference and pulmonary vascular resistance were increased (P < 0.05 versus baseline). Postoperative pulmonary TNF-alpha remained unchanged in both groups, but tended to be higher in PLA (P = 0.06 versus MP). The volume fraction of inactivated intra-alveolar surfactant was increased in PLA (58 +/- 17% versus 83 +/- 6%) and MP (55 +/- 18% versus 80 +/- 17%) after CPB (P < 0.05 versus baseline for both groups). Profound blood-air barrier injury was present in both groups at 8 h, as indicated by an increased blood-air barrier integrity score (PLA: 1.28 +/- 0.03 versus 1.70 +/- 0.1; MP: 1.27 +/- 0.08 versus 1.81 +/- 0.1; P < 0.05). CONCLUSION: Despite reducing the systemic inflammatory response and tending to reduce pulmonary TNF-alpha generation, methylprednisolone fails to preserve pulmonary surfactant morphology, blood-air barrier integrity, and pulmonary function after CPB.
Abstract:
BACKGROUND: Knowledge of how CFTR mutations other than F508del translate into the basic defect of cystic fibrosis (CF) is scarce owing to the low incidence of homozygous index cases. METHODS: 17 individuals homozygous for deletions, missense, stop, or splice-site mutations in the CFTR gene were investigated for clinical symptoms of CF, and CFTR function was assessed by sweat test, nasal potential difference, and intestinal current measurement. RESULTS: CFTR activity in the sweat gland, upper airways, and distal intestine was normal for homozygous carriers of G314E or L997F and in the range of F508del homozygotes for homozygous carriers of E92K, W1098L, R553X, R1162X, CFTRdele2(ins186), or CFTRdele2,3(21 kb). Homozygotes for M1101K, 1898+3 A-G, or 3849+10 kb C-T were not consistently CF or non-CF across the three bioassays. 14 individuals exhibited some chloride conductance in the airways and/or in the intestine, which was identified by the differential response to cAMP and DIDS as being caused by CFTR or by at least two other chloride conductances. DISCUSSION: CFTR mutations may lead to unusual electrophysiological or clinical manifestations. In vivo and ex vivo functional assessment of CFTR function and in-depth clinical examination of the index cases are indicated to classify as-yet uncharacterised CFTR mutations as either disease-causing lesions, risk factors, modifiers, or neutral variants.
Abstract:
In an increasingly interconnected world characterized by the accelerating interplay of cultural, linguistic, and national difference, the ability to negotiate that difference in an equitable and ethical manner is a crucial skill for both individuals and larger social groups. This dissertation, Writing Center Handbooks and Travel Guidebooks: Redesigning Instructional Texts for Multicultural, Multilingual, and Multinational Contexts, considers how instructional texts that ostensibly support the negotiation of difference (i.e., accepting and learning from difference) actually promote the management of difference (i.e., rejecting, assimilating, and erasing difference). As a corrective to this focus on managing difference, chapter two constructs a theoretical framework that facilitates the redesign of handbooks, guidebooks, and similar instructional texts. This framework centers on reflexive design practices and is informed by literacy theory (Gee; New London Group; Street), social learning theory (Wenger), globalization theory (Nederveen Pieterse), and composition theory (Canagarajah; Horner and Trimbur; Lu; Matsuda; Pratt). By implementing reflexive design practices in the redesign of instructional texts, this dissertation argues that instructional texts can promote the negotiation of difference and a multicultural/multilingual sensibility that accounts for twenty-first century linguistic and cultural realities. Informed by the theoretical framework of chapter two, chapters three and four conduct a rhetorical analysis of two forms of instructional text that are representative of the larger genre: writing center coach handbooks and travel guidebooks to Hong Kong. This rhetorical analysis reveals how both forms of text employ rhetorical strategies that uphold dominant monolingual and monocultural assumptions. Alternative rhetorical strategies are then proposed that can be used to redesign these two forms of instructional texts in a manner that aligns with multicultural and multilingual assumptions. These chapters draw on the work of scholars in Writing Center Studies (Boquet and Lerner; Carino; DiPardo; Grimm; North; Severino) and Technical Communication (Barton and Barton; Dilger; Johnson; Kimball; Slack), respectively. Chapter five explores how the redesign of coach handbooks and travel guidebooks proposed in this dissertation can be conceptualized as a political act. Ultimately, this dissertation argues that instructional texts are powerful heuristic tools that can enact social change if they are redesigned to foster the negotiation of difference and to promote multicultural/multilingual world views.
Abstract:
As an important civil engineering material, asphalt concrete (AC) is commonly used to build road surfaces, airports, and parking lots. With traditional laboratory tests and theoretical equations, it is a challenge to fully understand such a random composite material. Based on the discrete element method (DEM), this research seeks to develop and implement computer models as research approaches for improving understanding of the microstructure-based mechanics of AC. In this research, three categories of approaches were developed or employed to simulate the microstructure of AC materials: randomly generated models, idealized models, and image-based models. The image-based models were recommended for accurately predicting AC performance, while the other models were recommended as research tools for gaining deeper insight into AC microstructure-based mechanics. A viscoelastic micromechanical model was developed to capture viscoelastic interactions within the AC microstructure. Four types of constitutive models were built to address the four categories of interaction within an AC specimen. Each constitutive model consists of three parts representing three different interaction behaviors: a stiffness model (force-displacement relation), a bonding model (shear and tensile strengths), and a slip model (frictional property). Three techniques were developed to reduce the computational time of AC viscoelastic simulations; for typical three-dimensional models, the computational time was reduced from years or months to days or hours. Dynamic modulus and creep stiffness tests were simulated, and methodologies were developed to determine the viscoelastic parameters. The DE models successfully predicted dynamic modulus, phase angles, and creep stiffness over a wide range of frequencies, temperatures, and time spans. Mineral aggregate morphology characteristics (sphericity, orientation, and angularity) were studied to investigate their impact on AC creep stiffness, and it was found that aggregate characteristics significantly affect creep stiffness. Pavement responses and pavement-vehicle interactions were investigated by simulating pavement sections under a rolling wheel; wheel acceleration, steady rolling, and deceleration were found to significantly affect contact forces. Finally, a summary and recommendations are provided in the last chapter, and part of the computer code is provided in the appendices.
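To make the three-part constitutive structure concrete, here is a minimal sketch of a single DEM contact evaluation combining a linear stiffness model, a tensile bond check, and a Coulomb slip cap; all stiffnesses, strengths, and the friction coefficient are illustrative placeholders, and the viscoelastic (time-dependent) behavior described in the abstract is omitted.

```python
import numpy as np

def contact_force(overlap, rel_tangential_disp, kn=1e7, ks=5e6,
                  bond_tensile=0.0, mu=0.5):
    """Minimal DEM contact law: stiffness, bonding, and slip parts.

    overlap             : normal overlap between two particles (m), > 0 in contact
    rel_tangential_disp : accumulated tangential displacement (m)
    kn, ks              : normal / shear contact stiffnesses (N/m)
    bond_tensile        : tensile strength of the bond (N); 0 = unbonded
    mu                  : friction coefficient for the slip model
    Returns (normal force, shear force) in N.
    """
    fn = kn * overlap                      # stiffness model: linear normal force
    if fn < -bond_tensile:                 # bonding model: tensile bond breaks
        return 0.0, 0.0
    fs = ks * rel_tangential_disp          # trial shear force
    fs_max = mu * max(fn, 0.0)             # slip model: Coulomb friction cap
    fs = float(np.clip(fs, -fs_max, fs_max))
    return fn, fs

# Example: 1 um overlap and 2 um tangential displacement ->
# 10.0 N normal force and a shear force capped at mu * fn = 5.0 N
print(contact_force(1e-6, 2e-6))
```

In a full simulation this evaluation would run for every contact at every time step, which is why the abstract's time-reduction techniques matter for three-dimensional models.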
Abstract:
About one-sixth of the world’s land area, that is, about one-third of the land used for agriculture, has been affected by soil degradation in the historic past. While most of this damage was caused by water and wind erosion, other forms of soil degradation are induced by biological, chemical, and physical processes. Since the 1950s, pressure on agricultural land has increased considerably owing to population growth and agricultural modernization. Small-scale farming is the largest occupation in the world, involving over 2.5 billion people, over 70% of whom live below the poverty line. Soil erosion, along with other environmental threats, particularly affects these farmers by diminishing yields that are primarily used for subsistence. Soil and water conservation measures have been developed and applied on many farms. Local and science-based innovations are available for most agroecological conditions and land management and farming types. Principles and measures developed for small-scale as well as modern agricultural systems have begun to show positive impacts in most regions of the world, particularly in wealthier states and modern systems. Much more emphasis still needs to be given to small-scale farming, which requires external support for investment in sustainable land management technologies as an indispensable and integral component of farm activities.
Abstract:
The aim of this study was to determine the influence of individual factors on differences in bone mineral density (BMD) measured with dual X-ray absorptiometry in pencil-beam (PB) and fan-beam (FB) modes, in vivo and in vitro. PB.BMD and FB.BMD of 63 normal Caucasian females aged 21-80 yr were measured at the lumbar spine and hip. Residuals of the FB/PB regression were used to assess the impact of height, weight, adiposity index (AI = weight/height^(3/2)), back tissue thickness, and PB.BMD, respectively, on the FB/PB difference. The Hologic Anthropomorphic Spine Phantom (ASP) was measured using the PB and FB modes at two different levels to assess the impact of scanning mode and focus distance. The European Spine Phantom (ESP) prototype, a geometrically well-defined phantom with known vertebral densities, was measured using PB and FB modes and analyzed manually, to determine the impact of bone density on the FB/PB difference, and automatically, to determine the impact of edge detection on the FB/PB difference. Population BMD results were perfectly correlated but significantly overestimated by 1.5% at the lumbar spine and underestimated by 0.7% at the neck, 1.8% at the trochanter, and 2.0% at the total hip, respectively, when using the FB rather than the PB mode. At the lumbar spine, the FB/PB residual correlated negatively with height (r = 0.34, p < 0.01) and PB.BMD (r = 0.48, p < 0.0001) and positively with AI (r = 0.26, p < 0.05). At the hip, the trochanter residual correlated positively with weight (r = 0.36, p < 0.01) and AI (r = 0.36, p < 0.01). The FB mode significantly increased ASP BMD by 0.7% compared with PB. Using the FB mode, increasing focus distance significantly (p < 0.001) decreased area and bone mineral content, but not BMD. By contrast, increasing focus distance significantly decreased PB.BMD by 0.7%. With the ESP, the PB mode supplied accurate projected bone area (AREA) results but significantly underestimated the specified BMD in the manual analysis. The FB mode significantly underestimated PB.AREA by 2.9% but fitted the specified BMD quite well. The FB/PB overestimation was larger for the low-density (+8.7%) than for the high-density vertebra (+4.9%). The automated analysis resulted in more than 14% underestimation of PB.AREA (low-density vertebra) and an almost 13% overestimation of PB.BMD (high-density vertebra) using FB. In conclusion, FB and PB measurements are highly correlated at the lumbar spine and hip, with small but significant BMD differences related to height, adiposity, and BMD. In clinical practice, it can be erroneous to switch from one method to another, especially in women with low bone density.
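As a sketch of the residual analysis described above, the following computes FB-on-PB least-squares regression residuals and the adiposity index AI = weight/height^(3/2); the data are randomly generated placeholders standing in for the study's 63 measurements, so the printed correlation is near zero rather than the reported values.

```python
import numpy as np

def fb_pb_residuals(pb_bmd, fb_bmd):
    """Residuals of the FB-on-PB linear regression (least squares)."""
    slope, intercept = np.polyfit(pb_bmd, fb_bmd, 1)
    return fb_bmd - (slope * pb_bmd + intercept)

def adiposity_index(weight_kg, height_m):
    """AI = weight / height^(3/2), as defined in the abstract."""
    return weight_kg / height_m ** 1.5

# Hypothetical cohort: do the FB/PB residuals track adiposity?
rng = np.random.default_rng(0)
pb = rng.normal(1.0, 0.12, 63)                 # placeholder PB.BMD (g/cm^2)
fb = 1.015 * pb + rng.normal(0, 0.01, 63)      # placeholder FB.BMD, ~1.5% higher
res = fb_pb_residuals(pb, fb)
ai = adiposity_index(rng.normal(68, 10, 63), rng.normal(1.65, 0.07, 63))
print(np.corrcoef(res, ai)[0, 1])              # Pearson r between residuals and AI
```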
Abstract:
Introduction: Nocturnal dreams can be considered a kind of simulation of the real world on a higher cognitive level (Erlacher & Schredl, 2008). Within lucid dreams, the dreamer is aware of the dream state and thus able to control the ongoing dream content. Previous studies demonstrated that it is possible to practice motor tasks during lucid dreams and that doing so improves performance while awake (Erlacher & Schredl, 2010). Even though lucid dream practice might be a promising kind of cognitive rehearsal in sports, little is known about the characteristics of actions in lucid dreams. The purpose of the present study was to explore the relationship between time in dreams and in wakefulness, because in an earlier study (Erlacher & Schredl, 2004) we found that performing squats took lucid dreamers 44.5% more time than in the waking state, while for counting the same participants showed no differences between dreaming and wakefulness. To find out whether task modality, task length, or task complexity requires longer times in lucid dreams than in wakefulness, three experiments were conducted. Methods: In the first experiment, five proficient lucid dreamers spent two to three non-consecutive nights in the sleep laboratory with polysomnographic recording to verify REM sleep and detect eye signals. Participants counted from 1-10, 1-20, and 1-30 in wakefulness and in their lucid dreams. While dreaming, they marked the onset of lucidity as well as the beginning and end of the counting task with a Left-Right-Left-Right eye movement and reported their dreams after being awakened. The same procedure was used for the second experiment with seven lucid dreamers, except that they had to walk 10, 20, or 30 steps. In the third experiment, nine participants performed an exercise involving gymnastics elements such as various jumps and a roll. To control for task length, the gymnastics exercise in the waking state lasted about the same time as walking 10 steps. Results: As a general result we found, as in the earlier study, that performing a task in a lucid dream requires more time than in wakefulness. This tendency was found for all three tasks. However, there was no difference for task modality (counting vs. motor task). The relative time for the different task lengths likewise showed no difference. Finally, the more complex motor task (the gymnastics routine) did not require more time in lucid dreams than the simple motor task. Discussion/Conclusion: The results show that there is a robust effect of time in lucid dreams compared to wakefulness. The three experiments could not attribute these differences to task modality, task length, or task complexity. Therefore, further possible candidates need to be investigated, e.g. experience in lucid dreaming or psychological variables. References: Erlacher, D. & Schredl, M. (2010). Practicing a motor task in a lucid dream enhances subsequent performance: A pilot study. The Sport Psychologist, 24(2), 157-167. Erlacher, D. & Schredl, M. (2008). Do REM (lucid) dreamed and executed actions share the same neural substrate? International Journal of Dream Research, 1(1), 7-13. Erlacher, D. & Schredl, M. (2004). Time required for motor activity in lucid dreams. Perceptual and Motor Skills, 99, 1239-1242.