906 results for Placoderm Scales
Abstract:
Objectives To determine the proportion of hip fracture patients who experience long-term disability and to re-estimate the resulting burden of disease associated with hip fractures in Australia in 2003. Methods A literature review of the functional outcome following a hip fracture (keywords: morbidity, treatment outcome, disability, quality of life, recovery of function, hip fractures, and femoral neck fractures) was carried out using PubMed and Ovid MEDLINE. Results A range of scales and outcome measures are used to evaluate recovery following a hip fracture. Based on the available evidence on restrictions in activities of daily living, 29% of hip fracture cases in the elderly do not reach their pre-fracture levels 1 year post-fracture. Those who do recover tend to reach their pre-fracture levels of functioning at around 6 months. These new assumptions result in 8251 years lived with disability for hip fractures in Australia in 2003, a 4.5-fold increase compared with the previous calculation based on Global Burden of Disease assumptions that only 5% of hip fractures lead to long-term disability and that the duration of short-term disability is just 51 days. Conclusions The original assumptions used in burden of disease studies grossly underestimate the long-term disability from hip fractures. The long-term consequences of other injuries may similarly have been underestimated and need to be re-examined. This has important implications for modelling the cost-effectiveness of preventive interventions where disability-adjusted life years are used as a measure of health outcome.
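For orientation, burden-of-disease calculations of this kind typically rest on the incidence-based formula sketched below (a generic form, not the exact computation performed in the study):

$$\mathrm{YLD} = \sum_{\text{health states}} I \times DW \times L$$

where $I$ is the number of incident cases, $DW$ the disability weight and $L$ the average duration of disability in years. Replacing the original assumption (95% of cases disabled for roughly 51 days, 5% long term) with the revised one (recovery at around 6 months for most cases, long-term restriction for the 29% who do not recover) is what drives the reported 4.5-fold increase.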
Abstract:
In an estuary, mixing and dispersion result from the combination of large scale advection and small scale turbulence, both of which are complex to estimate. A field study was conducted in a small sub-tropical estuary in which high frequency (50 Hz) turbulence data were recorded continuously for about 48 hours. A triple decomposition technique was introduced to isolate the contributions of tides, resonance and turbulence in the flow field. A striking feature of the data set was the slow fluctuations, which exhibited large amplitudes of up to 50% of the tidal amplitude under neap tide conditions. The triple decomposition technique allowed a characterisation of the broader temporal scales present in the high frequency fluctuation data sampled over a number of full tidal cycles.
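For reference, a triple decomposition of this kind separates the instantaneous velocity into a slow (tidal) component, a slower-than-turbulence fluctuation attributed to resonance, and the turbulent fluctuation; a generic form is

$$u(t) = \overline{u}(t) + \tilde{u}(t) + u'(t)$$

where $\overline{u}$ denotes the tidal contribution, $\tilde{u}$ the slow resonance-driven fluctuation and $u'$ the turbulence (the notation is illustrative, not necessarily that used in the study).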
Abstract:
Hydrogeophysics is a growing discipline that holds significant promise to help elucidate details of dynamic processes in the near surface, built on the ability of geophysical methods to measure properties from which hydrological and geochemical variables can be derived. For example, bulk electrical conductivity is governed by, among other factors, interstitial water content, fluid salinity, and temperature, and can be measured using a range of geophysical methods. In many cases, electrical resistivity tomography (ERT) is well suited to characterize these properties in multiple dimensions and to monitor dynamic processes, such as water infiltration and solute transport. In recent years, ERT has been used increasingly for ecosystem research in a wide range of settings, in particular to characterize vegetation-driven changes in root-zone and near-surface water dynamics. This increased popularity is due to operational factors (e.g., improved equipment, low site impact), data considerations (e.g., excellent repeatability), and the fact that ERT operates at scales significantly larger than traditional point sensors. Current limitations to more widespread use of the approach include high equipment costs and the need for site-specific petrophysical relationships linking the measured geophysical properties to the properties of interest. In this presentation we will discuss recent equipment advances and theoretical and methodological aspects involved in the accurate estimation of soil moisture from ERT results. Examples will be presented from two studies in a temperate climate (Michigan, USA) and one from a humid tropical location (Tapajos, Brazil).
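As an illustration of the kind of site-specific petrophysical relationship referred to above, the minimal sketch below inverts Archie's law for volumetric water content; the function name, exponents and example values are assumptions for illustration, and surface conduction is neglected.

```python
def water_content_from_conductivity(sigma_bulk, sigma_water, phi, m=1.5, n=2.0):
    """Invert Archie's law, sigma_bulk = sigma_water * phi**m * Sw**n,
    for volumetric water content theta = phi * Sw.

    Surface conduction is neglected (clay-poor soils); the cementation (m)
    and saturation (n) exponents are site-specific and must be calibrated.
    """
    sw = (sigma_bulk / (sigma_water * phi ** m)) ** (1.0 / n)
    return phi * min(sw, 1.0)  # clip at full saturation

# Hypothetical usage: bulk conductivity 0.005 S/m, pore water 0.05 S/m, porosity 0.35
theta = water_content_from_conductivity(0.005, 0.05, 0.35)
```

A relationship of this form (or a laboratory-calibrated alternative such as Waxman-Smits) is the step that converts ERT resistivity images into the soil moisture estimates discussed above.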
Abstract:
Hydraulic conductivity (K) fields are used to parameterize groundwater flow and transport models. Numerical simulations require a detailed representation of the K field, synthesized to interpolate between available data. Several recent studies introduced high-resolution K data (HRK) at the Macro Dispersion Experiment (MADE) site, and used ground-penetrating radar (GPR) to delineate the main structural features of the aquifer. This paper describes a statistical analysis of these data, and the implications for K field modeling in alluvial aquifers. Two striking observations have emerged from this analysis. The first is that a simple fractional difference filter can have a profound effect on data histograms, organizing non-Gaussian ln K data into a coherent distribution. The second is that using GPR facies allows us to reproduce the significantly non-Gaussian shape seen in real HRK data profiles, using a simulated Gaussian ln K field in each facies. This illuminates a current controversy in the literature, between those who favor Gaussian ln K models, and those who observe non-Gaussian ln K fields. Both camps are correct, but at different scales.
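The fractional difference filter mentioned above can be written as $(1-B)^d$, with $B$ the backshift operator; a minimal sketch is given below, where the order $d$, the weight truncation and the file name are placeholders rather than values from the MADE analysis.

```python
import numpy as np

def fractional_difference(x, d, n_weights=100):
    """Apply a fractional difference filter (1 - B)^d to a 1-D series x.

    The weights follow the binomial expansion of (1 - B)^d:
    w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k, truncated at n_weights terms.
    """
    w = np.zeros(n_weights)
    w[0] = 1.0
    for k in range(1, n_weights):
        w[k] = w[k - 1] * (k - 1 - d) / k
    # Keep only outputs where the filter fully overlaps the data
    return np.convolve(x, w, mode="valid")

# Hypothetical usage on a vertical ln K profile
ln_k = np.loadtxt("lnK_profile.txt")             # placeholder file name
increments = fractional_difference(ln_k, d=0.4)  # d = 0.4 is illustrative only
```

Filtering each GPR facies separately and inspecting the histograms of the filtered increments is one way to test whether a Gaussian ln K model within facies is consistent with the non-Gaussian composite profile.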
Abstract:
Changes in global climate and land use affect important processes from evapotranspiration and groundwater recharge to carbon storage and biochemical cycling. Near surface soil moisture is pivotal to understanding the consequences of these changes. However, the dynamic interactions between vegetation and soil moisture remain largely unresolved because it is difficult to monitor and quantify subsurface hydrologic fluxes at relevant scales. Here we use electrical resistivity to monitor the influence of climate and vegetation on root-zone moisture, bridging the gap between remotely-sensed and in-situ point measurements. Our research quantifies large seasonal differences in root-zone moisture dynamics for a forest-grassland ecotone. We found large differences in effective rooting depth and moisture distributions for the two vegetation types. Our results highlight the likely impacts of land transformations on groundwater recharge, streamflow, and land-atmosphere exchanges.
Abstract:
Conservation planning and management programs typically assume relatively homogeneous ecological landscapes. Such “ecoregions” serve multiple purposes: they support assessments of competing environmental values, reveal priorities for allocating scarce resources, and guide effective on-ground actions such as the acquisition of a protected area and habitat restoration. Ecoregions have evolved from a history of organism–environment interactions, and are delineated at the scale or level of detail required to support planning. Depending on the delineation method, scale, or purpose, they have been described as provinces, zones, systems, land units, classes, facets, domains, subregions, and ecological, biological, biogeographical, or environmental regions. In each case, they are essential to the development of conservation strategies and are embedded in government policies at multiple scales.
Abstract:
Deep geothermal energy from the hot crystalline basement has remained an unsolved frontier for the geothermal industry for the past 30 years. This poses the challenge of developing a new unconventional geomechanics approach to stimulate such reservoirs. While a number of new unconventional brittle techniques are still available to improve stimulation on short time scales, the astonishing richness of failure modes at longer time scales in hot rocks has so far been overlooked. These failure modes represent a series of microscopic processes: brittle microfracturing prevails at low temperatures and fairly high deviatoric stresses, while with increasing temperature, decreasing applied stress, or longer time scales, the failure modes switch to transgranular and intergranular creep fractures. Accordingly, fluids play an active role and create their own pathways by facilitating shear localization through time-dependent dissolution and precipitation creep, rather than being a passive constituent that simply follows brittle fractures generated inside a shear zone by other localization mechanisms. We lay out a new theoretical approach for the design of new strategies to utilize, enhance and maintain the natural permeability in the deeper and hotter domain of geothermal reservoirs. The advantage of the approach is that, rather than engineering an entirely new EGS reservoir, we acknowledge a suite of creep-assisted geological processes that are driven by the current tectonic stress field. Such processes are particularly favoured at higher temperatures, potentially allowing future projects to target commercially viable combinations of temperature and flow rate.
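The temperature and stress dependence of the creep-dominated failure modes described above is commonly summarised by a power-law creep relation of the generic form

$$\dot{\varepsilon} = A\,\sigma^{n}\exp\!\left(-\frac{Q}{RT}\right)$$

where $\dot{\varepsilon}$ is the strain rate, $\sigma$ the deviatoric stress, $Q$ an activation enthalpy, $R$ the gas constant and $T$ the absolute temperature; the parameters $A$, $n$ and $Q$ are material constants to be calibrated and are not values proposed in this work. The exponential term is what makes creep mechanisms increasingly competitive with brittle failure as reservoir temperature rises.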
Abstract:
Insomnia is a pervasive problem involving poor sleep quality and quantity. Previous research has suggested that music listening can help alleviate insomnia, but exactly how music helps sleep problems has not been determined. A greater understanding of these processes could help practitioners to design more effective music-based insomnia treatments. This randomised controlled trial was designed to assess the influences of nightly music listening on the sleep-related thoughts and behaviours described in Harvey’s (2002) cognitive model of insomnia maintenance. University students, including a range of good and poor sleepers, were randomly assigned to a music listening group or a control group and were assessed before and after a two-week music listening intervention. Measures included a range of self-report scales, each assessing an element of Harvey’s cognitive model. During the intervention, the music listening group was asked to listen to provided music for at least 20 minutes each night. The control group was asked to maintain their regular nightly routines. Results indicated that the music listening group significantly improved on most of the factors theorised to influence sleep quality, although their actual sleep quality did not significantly improve. The control group did not change significantly on any measures. The results of this study suggest that music listening can have positive impacts on a range of factors theorised to influence sleep quality. However, as the music was not shown to actually improve sleep quality, Harvey’s cognitive model explanation of music’s effect on sleep quality may require further investigation.
Abstract:
Polymethacrylate monoliths, specifically poly(glycidyl methacrylate-co-ethylene dimethacrylate) or poly(GMA-co-EDMA) monoliths, are a new generation of chromatographic supports and are significantly different from conventional particle-based adsorbents, membranes, and other monolithic supports for biomolecule purification. Similar to other monoliths, polymethacrylate monoliths possess large pores which allow convective flow of the mobile phase and result in high flow rates at reduced pressure drop, unlike particulate supports. The simplicity of the adsorbent synthesis, pH resistance, and the ease and flexibility of tailoring their pore size to that of the target biomolecule are the key properties which differentiate polymethacrylate monoliths from other monoliths. Polymethacrylate monoliths are endowed with reactive epoxy groups for easy functionalization (with anion-exchange, hydrophobic, and affinity ligands) and high ligand retention. In this review, the structure and performance of polymethacrylate monoliths for chromatographic purification of biomolecules are evaluated and compared to those of other supports. The development and use of polymethacrylate monoliths for research applications have grown rapidly in recent times and have enabled high-throughput biomolecule purification on semi-preparative and preparative scales.
Abstract:
The technique of photo-CELIV (charge extraction by linearly increasing voltage) is one of the more straightforward and popular approaches to measure the faster carrier mobility in measurement geometries that are relevant for operational solar cells and other optoelectronic devices. It has been used to demonstrate a time-dependent photocarrier mobility in pristine polymers, attributed to energetic relaxation within the density of states. Conversely, in solar cell blends, the presence or absence of such energetic relaxation on transport timescales remains under debate. We developed a complete numerical model and performed photo-CELIV experiments on the model high efficiency organic solar cell blend poly[3,6-dithiophene-2-yl-2,5-di(2-octyldodecyl)-pyrrolo[3,4-c]pyrrole-1,4-dione-alt-naphthalene] (PDPP-TNT):[6,6]-phenyl-C71-butyric-acid-methyl-ester (PC70BM). In the studied solar cells a constant, time-independent mobility on the scale relevant to charge extraction was observed, where thermalisation of photocarriers occurs on time scales much shorter than the transit time. Therefore, photocarrier relaxation effects are insignificant for charge transport in these efficient photovoltaic devices.
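For context, the faster-carrier mobility in a photo-CELIV experiment is commonly estimated from the time of the extraction-current maximum using the standard analytical approximation

$$\mu \approx \frac{2 d^{2}}{3 A t_{\max}^{2}\left[1 + 0.36\,\Delta j / j(0)\right]}$$

where $d$ is the active-layer thickness, $A$ the voltage ramp rate, $t_{\max}$ the time of the extraction peak, $\Delta j$ the peak height and $j(0)$ the capacitive displacement-current step. This is the widely used closed-form estimate rather than the complete numerical model developed in the study; an extraction peak time that does not shift with the delay between photoexcitation and the voltage ramp is commonly taken as the signature of the constant mobility reported above.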
Abstract:
The trans-activator of transcription (TAT) peptide is regarded as the “gold standard” for cell-penetrating peptides, capable of traversing a mammalian membrane passively into the cytosolic space. This characteristic has been exploited through conjugation of TAT for applications such as drug delivery. However, the process by which TAT achieves membrane penetration remains ambiguous and unresolved. Mechanistic details of TAT peptide action are revealed herein by using three complementary methods: quartz crystal microbalance with dissipation (QCM-D), scanning electrochemical microscopy (SECM) and atomic force microscopy (AFM). When combined, these three scales of measurement show that membrane uptake of the TAT peptide occurs by trans-membrane insertion through a “worm-hole” pore that leads to ion permeability across the membrane layer. AFM data provided nanometre-scale visualisation of TAT puncturing a mammalian-mimetic membrane bilayer. The TAT peptide does not show the same specificity towards a bacterial-mimetic membrane, and QCM-D and SECM showed that the TAT peptide has a disruptive action towards these membranes. This investigation supports the energy-independent uptake of the cationic TAT peptide and provides empirical data that clarify the mechanism by which the TAT peptide achieves its membrane activity. The novel use of these three biophysical techniques provides valuable insight into the mechanism for TAT peptide translocation, which is essential for improvements in the cellular delivery of TAT-conjugated cargoes, including therapeutic agents required to target specific intracellular locations.
Abstract:
PURPOSE: This paper describes dynamic agent composition, used to support the development of flexible and extensible large-scale agent-based models (ABMs). This approach was motivated by a need to extend and modify, with ease, an ABM with an underlying networked structure as more information becomes available. Flexibility was also sought so that simulations can be set up with ease, without the need to program. METHODS: The dynamic agent composition approach consists of agents, whose implementation has been broken into atomic units, coming together at runtime to form the complex system representation on which simulations are run. These components capture information at a fine level of detail and provide a vast range of combinations and options for a modeller to create ABMs. RESULTS: A description of dynamic agent composition is given in this paper, as well as details of its implementation within MODAM (MODular Agent-based Model), a software framework applied to the planning of the electricity distribution network. Illustrations of the implementation of dynamic agent composition are given for that domain throughout the paper. It is, however, expected that this approach will be beneficial to other problem domains, especially those with a networked structure, such as water or gas networks. CONCLUSIONS: Dynamic agent composition has many advantages over the way agent-based models are traditionally built, for users, for developers, and for agent-based modelling as a scientific approach. Developers can extend the model without the need to access or modify previously written code; they can develop groups of entities independently and add them to those already defined to extend the model. Users can mix and match already implemented components to form large-scale ABMs, allowing them to quickly set up simulations and easily compare scenarios without the need to program. Dynamic agent composition provides a natural simulation space over which ABMs of networked structures are represented, facilitating their implementation; verification and validation of models is also facilitated by the ability to quickly set up alternative simulations.
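The composition idea can be illustrated with a short sketch in which an agent is assembled at runtime from independently written atomic components; the class and component names below are hypothetical illustrations, not the MODAM API.

```python
# Hypothetical sketch of dynamic agent composition (illustrative, not MODAM code):
# an agent is assembled at runtime from atomic components, each contributing
# one behaviour, so new components extend the model without touching old code.

class Component:
    def step(self, agent, t):
        raise NotImplementedError

class HouseholdLoad(Component):
    def step(self, agent, t):
        agent.state["load_kw"] = 1.2              # placeholder demand profile

class SolarGeneration(Component):
    def step(self, agent, t):
        agent.state["generation_kw"] = agent.state.get("panel_kw", 0.0) * 0.8

class Agent:
    def __init__(self, name, components):
        self.name, self.components, self.state = name, list(components), {}

    def step(self, t):
        for component in self.components:         # composed behaviours run in turn
            component.step(self, t)

# Mix and match components at set-up time, without programming new agent classes
house = Agent("house_42", [HouseholdLoad(), SolarGeneration()])
house.state["panel_kw"] = 5.0
for t in range(3):
    house.step(t)
print(house.state)
```

The design choice this illustrates is that behaviours live in small interchangeable units rather than in monolithic agent classes, which is what allows developers to add entity groups independently and users to assemble simulations without writing code.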
Abstract:
Biomedical systems involve a large number of entities and intricate interactions between them. Their direct analysis is therefore difficult, and it is often necessary to rely on computational models. These models require significant resources and parallel computing solutions, approaches that are particularly well suited given the inherently parallel nature of biomedical systems. Model hybridisation also permits the integration and simultaneous study of multiple aspects and scales of these systems, thus providing an efficient platform for multidisciplinary research.
Abstract:
Epigenetic changes correspond to heritable modifications of chromosome structure which do not involve alteration of the DNA sequence but do affect gene expression. These mechanisms play an important role in normal cell differentiation, but aberrant modification is also associated with several diseases, including cancer and neural disorders. Nevertheless, despite intensive study in recent years, the contribution of these modifications remains largely unquantified due to overall system complexity and insufficient data. Computational models can provide powerful auxiliary tools to experimentation, not least because they can span scales from the sub-cellular through to cell populations (or networks of genes). In this paper, the challenges to the development of realistic cross-scale models are discussed and illustrated with respect to current work.
Abstract:
This design-based research project addresses the gap between formal music education curricula and the knowledge and skills necessary to enter the professional music industry. It analyses the work of a teacher/researcher who invited her high school students to start their own business venture, Youth Music Industries (YMI). YMI also functioned as a learning environment informed by the theoretical concepts of communities of practice and social capital. The students staged cycles of events of various scales over a three-year period, as platforms for young artists to engage and develop new, young audiences across Queensland, Australia. The study found that students developed an entrepreneurial mindset through the acquisition of specific skills and knowledge. Their learning was captured and distilled into a set of design principles, a pedagogical approach transferable across the creative industries more broadly.