398 results for LEVEL VARIATIONS


Relevance: 20.00%

Abstract:

Organ printing techniques offer the potential to produce living 3D tissue constructs to repair or replace damaged or diseased human tissues and organs. Using these techniques, spatial variations along multiple axes with high geometric complexity can be obtained. The level of control offered by these technologies over printed tissues will allow tissue engineers to better study the factors that modulate tissue formation and function, and will provide a valuable tool to study the effect of anatomy on graft performance. In this chapter we discuss the history behind substrate patterning and cell and organ printing, and the rationale for developing organ printing techniques with respect to the limitations of current clinical tissue engineering strategies in effectively repairing damaged tissues. We discuss current 2-dimensional and 3-dimensional strategies for assembling cells, as well as the necessary support materials, such as hydrogels, bioinks and natural and synthetic polymers, adopted for organ printing research. Furthermore, given the current state of the art in organ printing technologies, we discuss some of their limitations and provide recommendations for future developments in this rapidly growing field.

Relevance: 20.00%

Abstract:

Purpose: The purpose of this paper is to formulate a conceptual framework for urban sustainability indicator selection. This framework will be used to develop an indicator-based evaluation method for assessing the sustainability of residential neighbourhood developments in Malaysia. Design/methodology/approach: We provide a brief overview of existing evaluation frameworks for sustainable development assessment. We then develop a conceptual Sustainable Residential Neighbourhood Assessment (SNA) framework utilising a four-pillar sustainability framework (environmental, social, economic and institutional) and a combination of domain-based and goal-based general frameworks. This merger offers the advantages of both individual frameworks while overcoming some of their weaknesses when used to develop the urban sustainability evaluation method for assessing residential neighbourhoods. Originality/value: This approach highlights that many existing frameworks for evaluating urban sustainability do not extend to assessing housing sustainability at the local level. Practical implications: It is expected that the indicator-based Sustainable Neighbourhood Assessment framework will present a practical mechanism for planners and developers to evaluate and monitor the sustainability performance of residential neighbourhood developments.
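As a minimal illustration of how such an indicator-based assessment might aggregate scores across the four pillars, the sketch below averages normalised indicators per pillar and then across pillars. All indicator names, values and the equal weighting are hypothetical, not taken from the paper.

```python
# Illustrative aggregation under a four-pillar framework
# (environmental, social, economic, institutional).
# Indicator names, values and weights are hypothetical.

def pillar_score(indicators):
    """Average normalised indicator values (each in [0, 1]) within a pillar."""
    return sum(indicators.values()) / len(indicators)

neighbourhood = {
    "environmental": {"green_space_ratio": 0.7, "water_efficiency": 0.5},
    "social":        {"access_to_services": 0.8, "safety_index": 0.6},
    "economic":      {"local_employment": 0.4, "housing_affordability": 0.5},
    "institutional": {"community_participation": 0.3},
}

scores = {pillar: pillar_score(ind) for pillar, ind in neighbourhood.items()}
overall = sum(scores.values()) / len(scores)  # equal pillar weighting assumed
print(scores)
print(overall)
```

In a real assessment the indicators would be normalised against benchmarks and weighted by stakeholder priorities; the equal weighting here is only a placeholder.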

Relevance: 20.00%

Abstract:

Level crossing crashes result in enormous human and financial cost to society. According to the Australian Transport Safety Bureau (ATSB) [5], a total of 632 railway level crossing (RLX) collisions between trains and road vehicles occurred in Australia between 2001 and June 2009. The cost of RLX collisions runs into the tens of millions of dollars each year in Australia [6], and loss of life and injury are commonplace when collisions occur. Based on estimates that 40% of rail-related fatalities occur at level crossings [12], an estimated 142 deaths occurred at RLX between 2001 and June 2009. The aim of this paper is to (i) summarise crash patterns in Australia, (ii) review existing international ITS interventions to improve level crossing safety and (iii) highlight open human factors research issues. Human factors (e.g., driver error, lapses or violations) have been shown to be a significant contributing factor in RLX collisions, with drivers of road vehicles responsible for many collisions. Unintentional errors have been found to contribute to 46% of RLX collisions [6] and appear to be far more commonplace than deliberate violations. Humans have been found to be inherently poor at using the sensory information available to them to make safe decisions at RLX, and tend to underestimate the speed of approaching large objects due to the non-linear increase in perceived size [6]. Collisions resulting from misjudgements of train approach speed and distance are common [20]. Thus, a fundamental goal for improved RLX safety is the provision of sufficient contextual information to road vehicle drivers to facilitate safe decision-making regarding crossing behaviour.
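The 142-death figure follows arithmetically from the stated 40% share; the quick check below reproduces it by back-calculating the implied total number of rail fatalities over the period (a figure inferred here for illustration, not stated in the abstract).

```python
# Back-of-envelope check of the fatality estimate cited above.
# Only the 40% share and the 142 RLX deaths come from the abstract;
# the implied national total is inferred for illustration.

rlx_share = 0.40   # estimated share of rail fatalities occurring at RLX
rlx_deaths = 142   # estimated RLX deaths, 2001 to June 2009

implied_total_fatalities = round(rlx_deaths / rlx_share)
print(implied_total_fatalities)  # about 355 rail fatalities in total
```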

Relevance: 20.00%

Abstract:

Impedance cardiography is an application of bioimpedance analysis primarily used in a research setting to determine cardiac output. It is a non-invasive technique that measures the change in the impedance of the thorax attributed to the ejection of a volume of blood from the heart. The cardiac output is calculated from the measured impedance using the parallel conductor theory and a constant value for the resistivity of blood. However, the resistivity of blood has been shown to be velocity dependent due to changes in the orientation of red blood cells induced by changing shear forces during flow. The overall goal of this thesis was to study the effect that flow deviations have on the electrical impedance of blood, both experimentally and theoretically, and to apply the results in a clinical setting. The resistivity of stationary blood is isotropic, as the red blood cells are randomly orientated due to Brownian motion. In the case of blood flowing through rigid tubes, the resistivity is anisotropic due to the biconcave discoidal shape and orientation of the cells. The shear forces generated across the width of the tube during flow cause the cells to align with their minimal cross-sectional area facing the direction of flow, in order to minimise the shear stress experienced by the cells. This in turn results in a larger cross-sectional area of plasma and a reduction in the resistivity of the blood as the flow increases. Understanding the contribution of this effect to the thoracic impedance change is a vital step in achieving clinical acceptance of impedance cardiography. The published literature investigates resistivity variations for constant blood flow; in this case, the shear forces are constant and the impedance remains constant during flow, at a magnitude less than that for stationary blood.
The research presented in this thesis, however, investigates the variations in resistivity of blood during pulsatile flow through rigid tubes and the relationship between impedance, velocity and acceleration. Using rigid tubes isolates the impedance change to variations associated with changes in cell orientation only. The implications of red blood cell orientation changes for clinical impedance cardiography were also explored. This was achieved through measurement and analysis of the experimental impedance of pulsatile blood flowing through rigid tubes in a mock circulatory system. A novel theoretical model including cell orientation dynamics was developed for the impedance of pulsatile blood through rigid tubes. The impedance of flowing blood was theoretically calculated using analytical methods for flow through straight tubes and the numerical Lattice Boltzmann method for flow through complex geometries such as aortic valve stenosis. The result of the analytical theoretical model was compared to the experimental impedance measurements through rigid tubes. The impedance calculated for flow through a stenosis using the Lattice Boltzmann method provides results for comparison with impedance cardiography measurements collected as part of a pilot clinical trial to assess the suitability of using bioimpedance techniques to detect the presence of aortic stenosis. The experimental and theoretical impedance of blood was shown to inversely follow the blood velocity during pulsatile flow, with correlations of -0.72 and -0.74 respectively. The results of both the experimental and theoretical investigations demonstrate that the acceleration of the blood is an important factor in determining the impedance, in addition to the velocity. During acceleration, the relationship between impedance and velocity is linear (r2 = 0.98, experimental and r2 = 0.94, theoretical).
The relationship between the impedance and velocity during the deceleration phase is characterised by a time decay constant, τ, ranging from 10 to 50 s. The high level of agreement between the experimental and theoretically modelled impedance demonstrates the accuracy of the model developed here. An increase in the haematocrit of the blood resulted in an increase in the magnitude of the impedance change due to changes in the orientation of red blood cells. The time decay constant was shown to decrease linearly with the haematocrit for both experimental and theoretical results, although the slope of this decrease was larger in the experimental case. The radius of the tube influences the experimental and theoretical impedance for the same velocity of flow. However, when the velocity was divided by the radius of the tube (termed the reduced average velocity), the impedance response was the same for two experimental tubes with equivalent reduced average velocity but different radii. The temperature of the blood was also shown to affect the impedance, with the impedance decreasing as the temperature increased. These results are the first published for the impedance of pulsatile blood. The experimental impedance change measured orthogonal to the direction of flow is in the opposite direction to that measured in the direction of flow. These results indicate that the impedance of blood flowing through rigid cylindrical tubes is axisymmetric along the radius. This has not previously been verified experimentally. Time-frequency analysis of the experimental results demonstrated that the measured impedance contains the same frequency components, occurring at the same time points in the cycle, as the velocity signal. This suggests that the impedance captures many of the fluctuations of the velocity signal.
Application of a theoretical steady flow model to pulsatile flow presented here has verified that the steady flow model is not adequate for calculating the impedance of pulsatile blood flow. The success of the new theoretical model over the steady flow model demonstrates that the velocity profile is important in determining the impedance of pulsatile blood. The clinical application of the impedance of blood flow through a stenosis was theoretically modelled using the Lattice Boltzmann method (LBM) for fluid flow through complex geometries. The impedance of blood exiting a narrow orifice was calculated for varying degrees of stenosis. Clinical impedance cardiography measurements were also recorded for both aortic valvular stenosis patients (n = 4) and control subjects (n = 4) with structurally normal hearts. This pilot trial was used to corroborate the results of the LBM. Results from both investigations showed that the decay time constant for impedance has potential in the assessment of aortic valve stenosis. In the theoretically modelled case (LBM results), the decay time constant increased with an increase in the degree of stenosis. The clinical results also showed a statistically significant difference in time decay constant between control and test subjects (P = 0.03). The time decay constant calculated for test subjects (τ = 180-250 s) is consistently larger than that determined for control subjects (τ = 50-130 s). This difference is thought to be due to differences in the orientation response of the cells as blood flows through the stenosis. Such a non-invasive technique using the time decay constant for screening of aortic stenosis provides additional information to that currently given by impedance cardiography techniques and improves the value of the device to practitioners. However, the results still need to be verified in a larger study.
While impedance cardiography has not been widely adopted clinically, it is research such as this that will enable future acceptance of the method.
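The acceleration/deceleration behaviour described above can be caricatured with a simple piecewise model: the impedance change inversely tracks velocity while the flow accelerates, and relaxes exponentially with a time constant τ while it decelerates. This is a minimal sketch with illustrative parameter values, not the cell-orientation model developed in the thesis.

```python
import math

# Piecewise caricature of the impedance-velocity relationship described
# above. The gain k and time constant tau are illustrative values only
# (tau is placed in the 10-50 s range reported for rigid-tube flow).

def impedance_change(v, dvdt, z_prev, dt, k=0.5, tau=30.0):
    """Impedance deviation from the stationary-blood baseline."""
    if dvdt >= 0:
        # acceleration phase: impedance inversely follows velocity (linear)
        return -k * v
    # deceleration phase: exponential relaxation toward the baseline
    return z_prev * math.exp(-dt / tau)

# one coarse pulse: velocity ramps up, then decays
dt, z = 0.01, 0.0
velocities = [0.0, 0.5, 1.0, 0.8, 0.5, 0.2]
for i in range(1, len(velocities)):
    dvdt = (velocities[i] - velocities[i - 1]) / dt
    z = impedance_change(velocities[i], dvdt, z, dt)
    print(round(z, 4))
```

The inverse linear branch mirrors the reported r2 ≈ 0.98 linearity during acceleration, and the exponential branch mirrors the τ-characterised decay during deceleration.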

Relevance: 20.00%

Abstract:

It is recognized that, in general, the performance of construction projects does not meet optimal expectations. One aspect of this is the performance of each participant, which is interdependent and makes a significant impact on overall project outcomes. Traditionally, the client is the owner of the project, the architect or engineer is engaged as the lead designer and a contractor is selected to construct the facilities. Generally, the performance of the participants is gauged by considering three main factors, namely time, cost and quality. As the level of satisfaction is a subjective measurement, it is rarely used in the performance evaluation of construction work. Recently, various approaches to the measurement of satisfaction have been made in attempting to determine the performance of construction project outcomes – for instance client satisfaction, consultant satisfaction, contractor satisfaction, customer satisfaction and home buyer satisfaction. These not only identify the performance of the construction project, but are also used to improve and maintain relationships. In addition, these assessments are necessary for continuous improvement and enhanced cooperation between participants. The measurement of satisfaction levels primarily involves expectations and perceptions. An expectation can be regarded as a comparison standard of different needs, motives and beliefs, while a perception is a subjective interpretation influenced by moods, experiences and values. This suggests that the disparity between perceptions and expectations may be used to represent different levels of satisfaction. However, this concept is rather new and in need of further investigation. This paper examines the methods commonly practiced in measuring satisfaction levels and the advantages of promoting these methods.
The results provided are a preliminary review of the advantages of satisfaction measurement in the construction industry and recommendations are made concerning the most appropriate methods for use in identifying the performance of project outcomes.
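The perception-expectation disparity described above can be expressed as a simple gap score per criterion. The sketch below uses hypothetical ratings on an assumed 1-5 scale; a negative gap means expectations were not met on that criterion.

```python
# Perception-expectation gap sketch for the satisfaction concept above.
# Criteria, scores and the 1-5 scale are hypothetical assumptions.

def satisfaction_gaps(expectations, perceptions):
    """Return per-criterion gaps (perception minus expectation)."""
    return {c: perceptions[c] - expectations[c] for c in expectations}

expectations = {"time": 4.0, "cost": 4.5, "quality": 4.0}
perceptions  = {"time": 3.5, "cost": 4.0, "quality": 4.5}

gaps = satisfaction_gaps(expectations, perceptions)
overall = sum(gaps.values()) / len(gaps)
print(gaps)      # negative entries: expectation not met
print(overall)
```

A real instrument would weight criteria by importance and validate the rating scale; the unweighted average is only a placeholder.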

Relevance: 20.00%

Abstract:

The outcomes of construction projects can be evaluated in numerous ways. One method is to measure the satisfaction of participants as represented by the differences between their expectations and perceptions. This measurement is used widely in construction as it promises benefits such as the improvement of product delivery and enhanced service quality by identifying necessary changes. Commonly, satisfaction measurement is gauged by evaluating the level of client satisfaction with construction performance. The measurement of customer satisfaction, on the other hand, is based on the quality of the end product. This evaluation is used to encourage contractors to improve their performance to a required level and to ensure that projects are delivered as expected in terms of time, budget and quality. Several studies of performance measurement have indicated that contractor performance is still not satisfactory, as the outcome delivered is not as required (because of cost overruns, time overruns or general dissatisfaction). This drawback may be due to the contractors' lack of expertise, motivation and/or satisfaction. The measurement of performance based on contractor satisfaction levels is still new, and very few studies have yet taken place in the construction industry. This paper examines how the characteristics of a contracting organisation – namely its experience in the industry, background, past performance, size and financial stability – may influence its satisfaction levels with regard to project performance. Literature review and interviews are used as research tools in the preliminary investigation. The outcome is expected to provide a basic understanding of contractor satisfaction measurement and its potential for improving the performance of project outcomes.

Relevance: 20.00%

Abstract:

Over the past twenty years, the conventional knowledge management approach has evolved into a strategic management approach that has found applications and opportunities outside of business, in society at large, through education, urban development, governance and healthcare, amongst others. Knowledge-Based Development for Cities and Societies: Integrated Multi-Level Approaches examines the concepts and challenges of knowledge management for both urban environments and entire regions, enhancing the expertise of scholars, researchers, practitioners, managers and urban developers in the development of successful knowledge-based development policies and the creation of knowledge cities and prosperous knowledge societies. This reference creates a large knowledge base for scholars, managers and urban developers and increases awareness of the role of knowledge cities and knowledge societies in the knowledge era, as well as of the challenges and opportunities for future research.

Relevance: 20.00%

Abstract:

The Granadilla eruption at 600 ka was one of the largest phonolitic explosive eruptions from the Las Cañadas volcano on Tenerife, producing a classical plinian eruptive sequence of a widespread pumice fall deposit overlain by an ignimbrite. The eruption resulted in a major phase of caldera collapse that probably destroyed the shallow-level magma chamber system. Granadilla pumices contain a diverse phenocryst assemblage of alkali feldspar + biotite + sodian diopside to aegirine–augite + titanomagnetite + ilmenite + nosean/haüyne + titanite + apatite; alkali feldspar is the dominant phenocryst and biotite is the main ferromagnesian phase. Kaersutite and partially resorbed plagioclase (oligoclase to sodic andesine) are present in some eruptive units, particularly in pumice erupted during the early plinian phase, and in the Granadilla ignimbrite at the top of the sequence. Associated with the kaersutite and plagioclase are small clots of microlitic plagioclase and kaersutite interpreted as quenched blebs of tephriphonolitic magma within the phonolite pumice. The Granadilla Member has previously been recognized as an example of reverse-then-normal compositional zonation, where the zonation is primarily expressed in terms of substantial variations in trace element abundances with limited major element variation (cryptic zonation). Evidence for cryptic zonation is also provided by the chemistry of the phenocryst phases, and corresponding changes in intensive parameters (e.g. T, fO2, fH2O). Geothermometry estimates indicate that the main body of phonolite magma had a temperature gradient from 860 °C to ∼790 °C, with hotter magma (≥900 °C) tapped at the onset and terminal phases of the eruption.
The reverse-then-normal chemical and thermal zonation reflects the initial tapping of a partially hybridized magma (mixing of phonolite and tephriphonolite), followed by the more sequential tapping of a zoned and relatively large body of highly evolved phonolite at a new vent and during the main plinian phase. This suggests that the different magma types within the main holding chamber could have been laterally juxtaposed, as well as in a density-stratified arrangement. Correlations between the presence of mixed phenocryst populations (i.e. presence of plagioclase and kaersutite) and coarser pumice fall layers suggest that increased eruption vigour led to the tapping of hybridized and/or less evolved magma probably from greater depths in the chamber. New oxygen isotope data for glass and mineral separates preclude syn-eruptive interaction between the vesiculating magma and hydrothermal fluids as the cause of the Sr isotope disequilibrium identified previously for the deposit. Enrichment in radiogenic Sr in the pumice glass has more likely been due to low-temperature exchange with meteoric water that was enriched in 87Sr by sea spray, which may be a common process affecting porous and glassy pyroclastic deposits on oceanic islands.

Relevance: 20.00%

Abstract:

This article discusses the interaction between original and adaptation in the fashion system; it also analyses, at a micro level, practices of adaptation adopted by consumers when making and re-making fashionable clothes. The article shows that the distinction between original and copy is historically determined, as it grew out of the romantic notion of the authentic work of art. Given the impossibility of determining copyright in fashion, the article suggests that adaptation is a better descriptor of practices that transform garments; the concept of adaptation also abolishes trite notions of fashion as pastiche or bricolage, arguing for adaptation as a way to look at the many variations and re-contextualisations of garments historically and cross-culturally.

Relevance: 20.00%

Abstract:

After bone fracture, various cellular activities lead to the formation of different tissue types, which form the basis for the process of secondary bone healing. Although these tissues have been quantified by histology, their material properties are not well understood. Thus, the aim of this study is to correlate the spatial and temporal variations in the mineral content and the nanoindentation modulus of the callus formed via intramembranous ossification over the course of bone healing. Midshaft tibial samples from a sheep osteotomy model at time points of 2, 3, 6 and 9 weeks were employed. PMMA-embedded blocks were used for quantitative backscattered electron imaging and nanoindentation of the newly formed periosteal callus near the cortex. The resulting indentation modulus maps show the heterogeneity of the modulus in the selected regions of the callus. The indentation modulus of the embedded callus is about 6 GPa at the early stage; at later stages of mineralization, the average indentation modulus reaches 14 GPa. There is a slight decrease in average indentation modulus in regions distant from the cortex, probably due to remodelling of the peripheral callus. The spatial and temporal distribution of mineral content in the callus tissue also illustrates the ongoing remodelling process observed in the histological analysis. Most interestingly, the average indentation modulus, even at 9 weeks, remains as low as 13 GPa, which is roughly 60% of that for cortical sheep bone. The decreased indentation modulus in the callus compared to the cortex is due to the lower average mineral content, and perhaps also to properties of the organic matrix that may differ from those of normal bone.

Relevance: 20.00%

Abstract:

This paper discusses major obstacles to the adoption of low-cost level crossing warning devices (LCLCWDs) in Australia and reviews those trialed in Australia and internationally. The argument for the use of LCLCWDs is that, for a given investment, more passive level crossings can be treated, thereby increasing safety benefits across the rail network. This approach, in theory, reduces risk across the network by utilizing a combination of low-cost and conventional level crossing interventions, similar to what is done in the road environment. This paper concludes that in order to determine whether this approach can produce better safety outcomes than the current approach, which involves the incremental upgrade of level crossings with conventional interventions, it is necessary to perform rigorous risk assessments and cost-benefit analyses of LCLCWDs. Further research is also needed to determine how best to differentiate less reliable LCLCWDs from conventional warning devices through the use of different warning signs and signals. This paper presents a strategy for progressing research and development of LCLCWDs and details how the Cooperative Research Centre (CRC) for Rail Innovation is fulfilling this strategy through current and future affordable level crossing projects.
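The budget argument above can be illustrated with a toy comparison. All costs and risk-reduction factors below are hypothetical placeholders; the point is only that many partially effective treatments can outweigh a few highly effective ones across a network, which is exactly what the proposed risk assessments and cost-benefit analyses would need to test.

```python
# Toy network-level comparison: few conventional upgrades vs. many
# low-cost devices under a fixed budget. All figures are hypothetical.

budget = 10_000_000          # available investment (assumed)
conventional_cost = 500_000  # per-crossing cost, conventional upgrade (assumed)
low_cost = 50_000            # per-crossing cost, LCLCWD (assumed)
risk_reduction_conv = 0.9    # assumed per-crossing risk reduction
risk_reduction_low = 0.5     # lower effectiveness assumed for LCLCWDs

crossings_conv = budget // conventional_cost
crossings_low = budget // low_cost

network_benefit_conv = crossings_conv * risk_reduction_conv
network_benefit_low = crossings_low * risk_reduction_low
print(crossings_conv, network_benefit_conv)  # 20 crossings treated
print(crossings_low, network_benefit_low)    # 200 crossings treated
```

With these placeholder numbers the low-cost strategy yields the larger network-wide benefit, but the conclusion flips if LCLCWD effectiveness or reliability is low enough, hence the paper's call for rigorous analysis.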

Relevance: 20.00%

Abstract:

In today’s electronic world, vast amounts of knowledge are stored within many datasets and databases. Often the default format of this data means that the knowledge within is not immediately accessible; it has to be mined and extracted, which requires automated tools that are effective and efficient. Association rule mining is one approach to obtaining the knowledge stored within datasets and databases; it yields frequent patterns and association rules between the items or attributes of a dataset, with varying levels of strength. However, this is also association rule mining’s downside: the number of rules that can be found is usually very large. In order to use association rules (and the knowledge within them) effectively, the number of rules needs to be kept manageable, so a method is needed to reduce the number of association rules without losing knowledge in the process. Thus the idea of non-redundant association rule mining was born. A second issue with association rule mining is determining which rules are interesting. The standard approach has been to use support and confidence, but these have their limitations. Approaches that use information about the dataset’s structure to measure association rules are limited, but could yield useful association rules if tapped. Finally, while it is important to obtain interesting association rules from a dataset in a manageable number, it is equally important to be able to apply them in a practical way, where the knowledge they contain can be taken advantage of. Association rules show items or attributes that appear together frequently. Recommendation systems also look at patterns and items or attributes that occur together frequently in order to make recommendations. It should therefore be possible to bring the two together. In this thesis we look at these three issues and propose approaches to address them.
For discovering non-redundant rules, we propose enhanced approaches to rule mining in multi-level datasets that allow hierarchically redundant association rules to be identified and removed without information loss. For discovering interesting association rules based on the dataset’s structure, we propose three measures for use in multi-level datasets. Lastly, we propose and demonstrate an approach that allows association rules to be used practically and effectively in a recommender system, while at the same time improving the recommender system’s performance. This becomes especially evident for the user cold-start problem, a serious problem facing recommender systems that our proposal helps to solve.
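As a reminder of the standard interestingness measures the thesis builds on, the sketch below computes support and confidence over a toy transaction set. The data and the rule are illustrative only, not taken from the thesis.

```python
# Support and confidence, the standard association-rule measures
# discussed above, computed over a toy transaction set.

transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Confidence of the rule antecedent -> consequent."""
    return support(antecedent | consequent) / support(antecedent)

print(support({"bread", "milk"}))       # 0.5
print(confidence({"bread"}, {"milk"}))  # about 0.667
```

Even this toy set produces many candidate rules; multiplying items and hierarchy levels is what makes redundancy removal and additional structure-based measures necessary.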

Relevance: 20.00%

Abstract:

Climate change effects are expected to substantially raise the average sea level. It is widely assumed that this rise will have a severe adverse impact on saltwater intrusion processes in coastal aquifers. In this study we hypothesize that a natural mechanism, identified as the “lifting process”, has the potential to mitigate, or in some cases completely reverse, the adverse intrusion effects induced by sea-level rise. A detailed numerical study using the MODFLOW-family computer code SEAWAT was completed to test this hypothesis and to understand the effects of this lifting process in both confined and unconfined systems. Our conceptual simulation results show that if the ambient recharge remains constant, sea-level rise will have no long-term impact (i.e., it will not affect the steady-state salt wedge) on confined aquifers. Our transient confined-flow simulations show a self-reversal mechanism whereby the wedge that initially intrudes into the formation due to sea-level rise is naturally driven back to its original position. In unconfined systems, the lifting process has a lesser influence due to changes in the value of effective transmissivity. A detailed sensitivity analysis was also completed to understand the sensitivity of this self-reversal effect to various aquifer parameters.
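The lifting intuition can be illustrated with the classical Ghyben-Herzberg relation (a far simpler model than the SEAWAT simulations used in the study): the steady interface depth below sea level depends only on the freshwater head above sea level. If recharge holds that head difference constant, a higher sea simply lifts the whole system rather than deepening the intrusion. The numbers below are hypothetical.

```python
# Illustrative Ghyben-Herzberg calculation, not the study's SEAWAT model.
# The interface depth below sea level scales with the freshwater head
# ABOVE sea level, so a sea-level rise with unchanged recharge (and thus
# unchanged head difference) leaves the wedge geometry unchanged.

RHO_F, RHO_S = 1000.0, 1025.0       # fresh / seawater density (kg/m^3)
GH_RATIO = RHO_F / (RHO_S - RHO_F)  # = 40

def interface_depth(head_above_sea_level):
    """Steady fresh/salt interface depth below sea level (m)."""
    return GH_RATIO * head_above_sea_level

h = 0.5  # freshwater head above sea level (m), hypothetical
print(interface_depth(h))  # same depth regardless of absolute sea level
```

The unconfined case departs from this picture because the saturated thickness, and hence the effective transmissivity, changes with sea level, which is why the lifting effect is weaker there.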