301 results for Possible solutions
Abstract:
Neutrophils constitute 50-60% of all circulating leukocytes; they present the first line of microbicidal defense and are involved in inflammatory responses. To examine immunocompetence in athletes, numerous studies have investigated the effects of exercise on the number of circulating neutrophils and their response to stimulation by chemotactic stimuli and activating factors. Exercise causes a biphasic increase in the number of neutrophils in the blood, arising from increases in catecholamine and cortisol concentrations. Moderate intensity exercise may enhance neutrophil respiratory burst activity, possibly through increases in the concentrations of growth hormone and the inflammatory cytokine IL-6. In contrast, intense or long duration exercise may suppress neutrophil degranulation and the production of reactive oxidants via elevated circulating concentrations of epinephrine (adrenaline) and cortisol. There is evidence of neutrophil degranulation and activation of the respiratory burst following exercise-induced muscle damage. In principle, improved responsiveness of neutrophils to stimulation following exercise of moderate intensity could mean that individuals participating in moderate exercise may have improved resistance to infection. Conversely, competitive athletes undertaking regular intense exercise may be at greater risk of contracting illness. However, there are limited data to support this concept. To elucidate the cellular mechanisms involved in the neutrophil responses to exercise, researchers have examined changes in the expression of cell membrane receptors, the production and release of reactive oxidants and more recently, calcium signaling. The investigation of possible modifications of other signal transduction events following exercise has not been possible because of current methodological limitations. At present, variation in exercise-induced alterations in neutrophil function appears to be due to differences in exercise protocols, training status, sampling points and laboratory assay techniques.
Abstract:
The health impacts of exposure to ambient temperature have been drawing increasing attention from the environmental health research community, government, society, industries, and the public. Case-crossover and time series models are most commonly used to examine the effects of ambient temperature on mortality. However, some key methodological issues remain to be addressed. For example, few studies have used spatiotemporal models to assess the effects of spatial temperatures on mortality. Few studies have used a case-crossover design to examine the delayed (distributed lag) and non-linear relationship between temperature and mortality. Also, little evidence is available on the effects of temperature changes on mortality, and on differences in heat-related mortality over time. This thesis aimed to address the following research questions: 1. How to combine the case-crossover design and distributed lag non-linear models? 2. Is there any significant difference in effect estimates between time series and spatiotemporal models? 3. How to assess the effects of temperature changes between neighbouring days on mortality? 4. Is there any change in temperature effects on mortality over time? To combine the case-crossover design and the distributed lag non-linear model, datasets of deaths, weather conditions (minimum temperature, mean temperature, maximum temperature, and relative humidity) and air pollution were acquired from Tianjin, China, for the years 2005 to 2007. I demonstrated how to combine the case-crossover design with a distributed lag non-linear model. This allows the case-crossover design to estimate the non-linear and delayed effects of temperature whilst controlling for seasonality. There was a consistent U-shaped relationship between temperature and mortality. Cold effects were delayed by 3 days and persisted for 10 days. Hot effects were acute, lasted for three days, and were followed by mortality displacement for non-accidental, cardiopulmonary, and cardiovascular deaths. Mean temperature was a better predictor of mortality (based on model fit) than maximum or minimum temperature. It is still unclear whether spatiotemporal models using spatial temperature exposure produce better estimates of mortality risk compared with time series models that use a single site's temperature or averaged temperature from a network of sites. Daily mortality data were obtained from 163 locations across Brisbane city, Australia, from 2000 to 2004. Ordinary kriging was used to interpolate spatial temperatures across the city based on 19 monitoring sites. A spatiotemporal model was used to examine the impact of spatial temperature on mortality. A time series model was used to assess the effects on mortality of a single site's temperature and of temperature averaged from 3 monitoring sites. Squared Pearson scaled residuals were used to check the model fit. The results of this study show that even though spatiotemporal models gave a better model fit than time series models, spatiotemporal and time series models gave similar effect estimates. Time series analyses using temperature recorded at a single monitoring site, or the average temperature of multiple sites, were as good at estimating the association between temperature and mortality as a spatiotemporal model. A time series Poisson regression model was used to estimate the association between temperature change and mortality in summer in Brisbane, Australia during 1996–2004 and Los Angeles, United States during 1987–2000.
Temperature change was calculated as the current day's mean temperature minus the previous day's mean. In Brisbane, a drop of more than 3 °C in temperature between days was associated with relative risks (RRs) of 1.16 (95% confidence interval (CI): 1.02, 1.31) for non-external mortality (NEM), 1.19 (95% CI: 1.00, 1.41) for NEM in females, and 1.44 (95% CI: 1.10, 1.89) for NEM in those aged 65-74 years. An increase of more than 3 °C was associated with RRs of 1.35 (95% CI: 1.03, 1.77) for cardiovascular mortality and 1.67 (95% CI: 1.15, 2.43) for people aged < 65 years. In Los Angeles, only a drop of more than 3 °C was significantly associated with RRs of 1.13 (95% CI: 1.05, 1.22) for total NEM, 1.25 (95% CI: 1.13, 1.39) for cardiovascular mortality, and 1.25 (95% CI: 1.14, 1.39) for people aged ≥ 75 years. In both cities, there were joint effects of temperature change and mean temperature on NEM. A change in temperature of more than 3 °C, whether positive or negative, has an adverse impact on mortality even after controlling for mean temperature. I examined the variation in the effects of high temperatures on elderly mortality (age ≥ 75 years) by year, city and region for 83 large US cities between 1987 and 2000. High temperature days were defined as two or more consecutive days with temperatures above the 90th percentile for each city during each warm season (May 1 to September 30). The mortality risk for high temperatures was decomposed into a "main effect" due to high temperatures, estimated using a distributed lag non-linear function, and an "added effect" due to consecutive high temperature days. I pooled yearly effects across regions and overall effects at both regional and national levels. The effects of high temperature (both main and added effects) on elderly mortality varied greatly by year, city and region. The years with higher heat-related mortality were often followed by those with relatively lower mortality. Understanding this variability in the effects of high temperatures is important for the development of heat-warning systems. In conclusion, this thesis makes contributions in several respects. The case-crossover design was combined with a distributed lag non-linear model to assess the effects of temperature on mortality in Tianjin. This allows the case-crossover design to flexibly estimate the non-linear and delayed effects of temperature. Both extreme cold and high temperatures increased the risk of mortality in Tianjin. A time series model using a single site's temperature, or temperature averaged from several sites, can be used to examine the effects of temperature on mortality. Temperature change (whether a large drop or a large increase) increases the risk of mortality. The high temperature effect on mortality is highly variable from year to year.
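As a hedged illustration of the temperature-change analysis described above (not the thesis code), the following Python sketch assumes a daily dataset with hypothetical columns date, mean_temp and deaths, computes the between-day change in mean temperature, flags changes of more than 3 °C in either direction, and fits a simple Poisson regression with statsmodels; the published analyses additionally used distributed lag non-linear terms and spline-based control for season and trend.

```python
# Minimal sketch (not the thesis code) of the temperature-change analysis:
# compute the day-to-day change in mean temperature and relate daily death
# counts to it with a Poisson time series regression. Column names and the
# input file are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("daily_summer_data.csv", parse_dates=["date"])  # hypothetical file
df = df.sort_values("date")

# Temperature change = current day's mean temperature minus the previous day's mean
df["temp_change"] = df["mean_temp"] - df["mean_temp"].shift(1)

# Flag large between-day changes (more than 3 degrees up, or more than 3 degrees down)
df["big_increase"] = (df["temp_change"] > 3).astype(int)
df["big_drop"] = (df["temp_change"] < -3).astype(int)

# Poisson regression of daily deaths on the temperature-change flags, adjusting
# for mean temperature and a crude linear time trend (a stand-in for the smooth
# season/trend terms used in the published models).
df["time"] = range(len(df))
model = smf.glm(
    "deaths ~ big_increase + big_drop + mean_temp + time",
    data=df.dropna(),
    family=sm.families.Poisson(),
).fit()
print(model.summary())
```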
Abstract:
We present a rigorous validation of the analytical Amadei solution for the stress concentration around an arbitrarily orientated borehole in general anisotropic elastic media. First, we revisit the theoretical framework of the Amadei solution and present analytical insights showing that the solution does indeed contain all special cases of symmetry, contrary to previous understanding, provided that the reduced strain coefficients b11 and b55 are not equal. It is shown from theoretical considerations and published experimental data that b11 and b55 are not equal for realistic rocks. Second, we develop a 3D finite element elastic model within a hybrid analytical–numerical workflow that circumvents the need to rebuild and remesh the model for every borehole and material orientation. Third, we show that the borehole stresses computed from the numerical model and the analytical solution match almost perfectly for different borehole orientations (vertical, deviated and horizontal) and for several cases involving isotropic, transversely isotropic and orthorhombic symmetries. It is concluded that the analytical Amadei solution is valid with no restriction on the borehole orientation or the symmetry of the elastic anisotropy.
Abstract:
We report on an accurate numerical scheme for the evolution of an inviscid bubble in radial Hele-Shaw flow, where the nonlinear boundary effects of surface tension and kinetic undercooling are included on the bubble-fluid interface. As well as demonstrating the onset of the Saffman-Taylor instability for growing bubbles, the numerical method is used to show the effect of the boundary conditions on the separation (pinch-off) of a contracting bubble into multiple bubbles, and the existence of multiple possible asymptotic bubble shapes in the extinction limit. The numerical scheme also allows for the accurate computation of bubbles which pinch off very close to the theoretical extinction time, raising the possibility of computing solutions for the evolution of bubbles with non-generic extinction behaviour.
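For orientation, the moving-boundary problem being solved can be stated schematically as below; this is a generic sketch of a radial Hele-Shaw model with surface tension and kinetic undercooling on the interface, and the paper's own nondimensionalisation and sign conventions may differ.

```latex
% Schematic moving-boundary problem (generic nondimensional form; the paper's
% scalings and sign conventions may differ).
\begin{align*}
  \nabla^{2} p &= 0                      && \text{in the viscous fluid (Darcy flow),} \\
  v_n &= -\frac{\partial p}{\partial n}  && \text{on the bubble--fluid interface,} \\
  p &= \sigma \kappa + c\, v_n           && \text{on the bubble--fluid interface,}
\end{align*}
```

where p is the fluid pressure, v_n the normal velocity of the interface, κ its curvature, σ the surface-tension parameter and c the kinetic-undercooling parameter; setting σ = c = 0 recovers the unregularised zero-surface-tension problem.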
Abstract:
The work presented in this thesis investigates the mathematical modelling of charge transport in electrolyte solutions, within the nanoporous structures of electrochemical devices. We compare two approaches found in the literature by developing one-dimensional transport models based on the Nernst-Planck and Maxwell-Stefan equations. The development of the Nernst-Planck equations relies on the assumption that the solution is infinitely dilute. However, this is typically not the case for the electrolyte solutions found within electrochemical devices. Furthermore, ionic concentrations much higher than those of the bulk concentrations can be obtained near the electrode/electrolyte interfaces due to the development of an electric double layer. Hence, multicomponent interactions which are neglected by the Nernst-Planck equations may become important. The Maxwell-Stefan equations account for these multicomponent interactions, and thus they should provide a more accurate representation of transport in electrolyte solutions. To allow for the effects of the electric double layer in both the Nernst-Planck and Maxwell-Stefan equations, we do not assume local electroneutrality in the solution. Instead, we model the electrostatic potential as a continuously varying function, by way of Poisson's equation. Importantly, we show that for a ternary electrolyte solution at high interfacial concentrations, the Maxwell-Stefan equations predict behaviour that is not recovered from the Nernst-Planck equations. The main difficulty in the application of the Maxwell-Stefan equations to charge transport in electrolyte solutions is knowledge of the transport parameters. In this work, we apply molecular dynamics simulations to obtain the required diffusivities, and thus we are able to incorporate microscopic behaviour into a continuum-scale model. This is important due to the small size scales we are concerned with, as we are still able to retain the computational efficiency of continuum modelling. This approach provides an avenue by which the microscopic behaviour may ultimately be incorporated into a full device-scale model. The one-dimensional Maxwell-Stefan model is extended to two dimensions, representing an important first step for developing a fully-coupled interfacial charge transport model for electrochemical devices. It allows us to begin investigation into ambipolar diffusion effects, where the motion of the ions in the electrolyte is affected by the transport of electrons in the electrode. As we do not consider modelling in the solid phase in this work, this is simulated by applying a time-varying potential to one interface of our two-dimensional computational domain, thus allowing a flow field to develop in the electrolyte. Our model facilitates the observation of the transport of ions near the electrode/electrolyte interface. For the simulations considered in this work, we show that while there is some motion in the direction parallel to the interface, the interfacial coupling is not sufficient for the ions in solution to be "dragged" along the interface for long distances.
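For reference, the two transport descriptions being compared can be written in their generic textbook forms; the notation below is standard rather than the thesis's own, and the Maxwell-Stefan driving force is expressed in terms of electrochemical potentials.

```latex
% Generic forms of the transport equations compared in this work
% (standard textbook notation, not necessarily that of the thesis).
\begin{align*}
  \mathbf{N}_i &= -D_i \nabla c_i - \frac{z_i F}{R T}\, D_i c_i \nabla \phi
     && \text{(Nernst--Planck flux of species } i\text{)} \\
  -\frac{c_i}{R T}\,\nabla \mu_i &= \sum_{j \neq i}
     \frac{c_i c_j \left(\mathbf{v}_i - \mathbf{v}_j\right)}{c_T\, \mathcal{D}_{ij}}
     && \text{(Maxwell--Stefan force balance)} \\
  \nabla^{2} \phi &= -\frac{F}{\varepsilon} \sum_i z_i c_i
     && \text{(Poisson's equation; no local electroneutrality)}
\end{align*}
```

Here c_i, z_i and v_i are the concentration, charge number and velocity of species i, μ_i is its electrochemical potential, c_T the total concentration, D_ij the Maxwell-Stefan diffusivities (obtained in this work from molecular dynamics), and ε the permittivity.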
Abstract:
The objective of this PhD research program is to investigate numerical methods for simulating variably-saturated flow and sea water intrusion in coastal aquifers in a high-performance computing environment. The work is divided into three overlapping tasks: to develop an accurate and stable finite volume discretisation and numerical solution strategy for the variably-saturated flow and salt transport equations; to implement the chosen approach in a high performance computing environment that may have multiple GPUs or CPU cores; and to verify and test the implementation. The geological description of aquifers is often complex, with porous materials possessing highly variable properties that are best described using unstructured meshes. The finite volume method is a popular method for the solution of the conservation laws that describe sea water intrusion, and is well-suited to unstructured meshes. In this work we apply a control volume-finite element (CV-FE) method to an extension of a recently proposed formulation (Kees and Miller, 2002) for variably saturated groundwater flow. The CV-FE method evaluates fluxes at points where material properties and gradients in pressure and concentration are consistently defined, making it both suitable for heterogeneous media and mass conservative. Using the method of lines, the CV-FE discretisation gives a set of differential algebraic equations (DAEs) amenable to solution using higher-order implicit solvers. Heterogeneous computer systems that use a combination of computational hardware such as CPUs and GPUs are attractive for scientific computing due to the potential advantages offered by GPUs for accelerating data-parallel operations. We present a C++ library that implements data-parallel methods on both CPUs and GPUs. The finite volume discretisation is expressed in terms of these data-parallel operations, which gives an efficient implementation of the nonlinear residual function. This makes the implicit solution of the DAE system possible on the GPU, because the inexact Newton-Krylov method used by the implicit time stepping scheme can approximate the action of a matrix on a vector using residual evaluations. We also propose preconditioning strategies that are amenable to GPU implementation, so that all computationally-intensive aspects of the implicit time stepping scheme are implemented on the GPU. Results are presented that demonstrate the efficiency and accuracy of the proposed numerical methods and formulation. The formulation offers excellent conservation of mass, and higher-order temporal integration increases both the numerical efficiency and the accuracy of the solutions. Flux limiting produces accurate, oscillation-free solutions on coarse meshes, whereas much finer meshes are required to obtain solutions with equivalent accuracy using upstream weighting. The computational efficiency of the software is investigated using CPUs and GPUs on a high-performance workstation. The GPU version offers considerable speedup over the CPU version, with one GPU giving a speedup factor of 3 over the eight-core CPU implementation.
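The matrix-free structure mentioned above (the Krylov solver only needing the action of the Jacobian on a vector) can be illustrated with a short, hedged sketch; it is not the thesis's C++ library, just a Python/SciPy toy showing how Jacobian-vector products are approximated from residual evaluations inside an inexact Newton iteration.

```python
# Minimal sketch (illustrative only, not the thesis C++ library) of the
# matrix-free inexact Newton-Krylov idea: the Krylov solver only needs
# Jacobian-vector products, which are approximated from residual evaluations,
# so the Jacobian is never assembled; this is convenient on a GPU.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_krylov_step(residual, u, eps=1e-7):
    """One Newton step for residual(u) = 0 using finite-difference J*v products."""
    r0 = residual(u)

    def jac_vec(v):
        # J(u) @ v  ~=  (residual(u + eps*v) - residual(u)) / eps
        return (residual(u + eps * v) - r0) / eps

    J = LinearOperator((u.size, u.size), matvec=jac_vec)
    du, info = gmres(J, -r0)          # inexact inner (Krylov) solve
    if info != 0:
        raise RuntimeError("GMRES did not converge")
    return u + du

# Toy usage: solve u**3 - 2 = 0 componentwise
residual = lambda u: u**3 - 2.0
u = np.ones(4)
for _ in range(10):
    u = newton_krylov_step(residual, u)
print(u)  # approx 2**(1/3) = 1.2599 in every component
```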
Abstract:
Evidence based practice (EBP) focuses on solving ‘tame’ problems, where literature supports question construction toward determining a solution. What happens when there is no existing evidence, or when the need for agility precludes a full EBP implementation? How might we build a more agile and innovative practice that facilitates the design of solutions to complex and wicked problems, particularly in cases where there is no existing literature? As problem solving and innovation methods, EBP and design thinking overlap considerably. The literature indicates the potential benefits to be gained for evidence based practice from adopting a human-centred rather than literature-focused foundation. The design thinking process is social and collaborative by nature, which enables it to be more agile and produce more innovative results than evidence based practice. This paper recommends a hybrid approach to maximise the strengths and benefits of the two methods for designing solutions to wicked problems. Incorporating design thinking principles and tools into EBP has the potential to move its applicability beyond tame problems and continuous improvement, and toward wicked problem solving and innovation. The potential of this hybrid approach in practice is yet to be explored.
Abstract:
We consider a model for thin film flow down the outside and inside of a vertical cylinder. Our focus is to study the effect that the curvature of the cylinder has on the gravity-driven instability of the advancing contact line and to simulate the resulting fingering patterns that form due to this instability. The governing partial differential equation is fourth order with a nonlinear degenerate diffusion term that represents the stabilising effect of surface tension. We present numerical solutions obtained by implementing an efficient alternating direction implicit scheme. When compared to the problem of flow down a vertical plane, we find that increasing substrate curvature tends to increase the fingering instability for flow down the outside of the cylinder, whereas for flow down the inside of the cylinder, substrate curvature has the opposite effect. Further, we demonstrate the existence of nontrivial travelling wave solutions which describe fingering patterns that propagate down the inside of a cylinder at constant speed without changing form. These solutions are perfectly analogous to those found previously for thin film flow down an inclined plane.
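For context, the planar analogue of the governing equation has the following generic lubrication-theory form; this is a sketch only, and the cylindrical model in the paper carries additional substrate-curvature terms.

```latex
% Generic nondimensional thin-film equation for gravity-driven flow down a
% vertical plane (planar sketch; the paper's cylindrical equation differs).
\begin{equation*}
  \frac{\partial h}{\partial t}
  + \frac{\partial}{\partial z}\!\left(h^{3}\right)
  + \nabla \cdot \left( h^{3}\, \nabla \nabla^{2} h \right) = 0,
\end{equation*}
```

where h is the film thickness, z the direction of gravity, the second term is gravity-driven advection, and the fourth-order term is the nonlinear degenerate surface-tension diffusion that stabilises the advancing contact line.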
Abstract:
This paper takes its root in a trivial observation: management approaches are unable to provide relevant guidelines for coping with uncertainty and trust in our modern world. Thus, managers look to reduce uncertainty through information-supported decision-making, sustained by ex-ante rationalization. They strive to achieve the best possible solution, stability, predictability, and control of the "future". Hence, they turn to a plethora of "prescriptive panaceas" and "management fads" that promise simple solutions through best practices. However, these solutions are ineffective. They address only one part of a system (e.g. an organization) instead of the whole, and they miss the interactions and interdependencies with other parts, leading to "suboptimization". Furthermore, classical cause-and-effect investigations and research are not very helpful in this regard. Where do we go from there? In this conversation, we want to challenge the assumptions supporting traditional management approaches and shed some light on the problem of management discourse fads, using the concept of maturity and maturity models in the context of temporary organizations as a support for reflection. The global economy is characterized by the use and development of standards, and compliance with standards as a practice is said to enable better decision-making by managers under uncertainty, control of complexity, and higher performance. Among the plethora of standards, organizational maturity and maturity models hold a specific place, owing to a general belief that organizational performance is a dependent variable of continuous (business) process improvement, grounded in a kind of evolutionary metaphor. Our intention is neither to offer a new "evidence-based management fad" to practitioners, nor to suggest a research gap to scholars. Rather, we want to open an assumption-challenging conversation with regard to mainstream approaches (neo-classical economics and organization theory), turning "our eyes away from the blinding light of eternal certitude towards the refracted world of turbid finitude" (Long, 2002, p. 44), generating what Bernstein has named "Cartesian Anxiety" (Bernstein, 1983, p. 18), and to revisit the conceptualization of maturity and maturity models. We rely on conventions theory and a systemic-discursive perspective. These two lenses have both information & communication and self-producing systems as common threads. Furthermore, the narrative approach is well suited to exploring complex ways of thinking about organizational phenomena as complex systems. This approach is relevant to our object of curiosity, i.e. the concept of maturity and maturity models, as maturity models (as standards) are discourses and systems of regulations. The main contribution of this conversation is that we suggest moving from a neo-classical "theory of the game", which aims at making the complex world simpler in playing the game, to a "theory of the rules of the game", which aims at influencing and challenging the rules of the game constitutive of maturity models (the conventions and governing systems) that make individual calculation compatible with the social context, and that make possible the coordination of relationships and cooperation between agents with divergent or potentially divergent interests and values. A second contribution is the reconceptualization of maturity as a structural coupling between conventions, rather than as an independent variable leading to organizational performance.
Abstract:
Application of "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures permits direct and accurate determination of ultimate system strengths, without resort to simplified elastic methods of analysis and semi-empirical specification equations. However, the application of advanced analysis methods has previously been restricted to steel frames comprising only compact sections that are not influenced by the effects of local buckling. A research project has been conducted with the aim of developing concentrated plasticity methods suitable for practical advanced analysis of steel frame structures comprising non-compact sections. This paper contains a comprehensive set of analytical benchmark solutions for steel frames comprising non-compact sections, which can be used to verify the accuracy of simplified concentrated plasticity methods of advanced analysis. The analytical benchmark solutions were obtained using a distributed plasticity shell finite element model that explicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. A brief description and verification of the shell finite element model is provided in this paper.
Abstract:
During the last several decades, the quality of natural resources and their services has been exposed to significant degradation from increased urban populations combined with the sprawl of settlements, the development of transportation networks and industrial activities (Dorsey, 2003; Pauleit et al., 2005). As a result of this environmental degradation, a sustainable framework for urban development is required to provide the resilience of natural resources and ecosystems. Sustainable urban development refers to the management of cities with adequate infrastructure to support the needs of their populations for present and future generations as well as to maintain the sustainability of their ecosystems (UNEP/IETC, 2002; Yigitcanlar, 2010). One of the important strategic approaches for planning sustainable cities is 'ecological planning'. Ecological planning is a multi-dimensional concept that aims to preserve biodiversity richness and ecosystem productivity through the sustainable management of natural resources (Barnes et al., 2005). As stated by Baldwin (1985, p. 4), ecological planning is the initiation and operation of activities to direct and control the acquisition, transformation, disruption and disposal of resources in a manner capable of sustaining human activities with a minimum disruption of ecosystem processes. Therefore, ecological planning is a powerful method for creating sustainable urban ecosystems. In order to explore the city as an ecosystem and investigate the interaction between the urban ecosystem and human activities, a holistic urban ecosystem sustainability assessment approach is required. Urban ecosystem sustainability assessment serves as a tool that helps policy- and decision-makers improve their actions towards sustainable urban development. There are several methods used in urban ecosystem sustainability assessment, among which sustainability indicators and composite indices are the most commonly used tools for assessing progress towards sustainable land use and urban management. Currently, a variety of composite indices are available to measure sustainability at the local, national and international levels. However, the main conclusion drawn from the literature review is that they are too broad to be applied to assess local and micro-level sustainability, and no benchmark value exists for most of the indicators due to limited data availability and non-comparable data across countries. Mayer (2008, p. 280) supports this by stating that "as different as the indices may seem, many of them incorporate the same underlying data because of the small number of available sustainability datasets". Mori and Christodoulou (2011) also argue that this relative evaluation and comparison produces biased assessments, as data only exist for some entities, which also means excluding many nations from evaluation and comparison. Thus, there is a need for developing an accurate and comprehensive micro-level urban ecosystem sustainability assessment method. In order to develop such a model, it is practical to adopt an approach that utilises indicators to collect data, designates certain threshold values or ranges, performs a comparative sustainability assessment via indices at the micro-level, and aggregates these assessment findings to the local level.
Hereby, through this approach and model, it is possible to produce sufficient and reliable data to enable comparison at the local level, and to provide useful results to inform the local planning, conservation and development decision-making process to secure sustainable ecosystems and urban futures. To advance research in this area, this study investigated the environmental impacts of an existing urban context by using a composite index, with the aim of identifying the interaction between urban ecosystems and human activities in the context of environmental sustainability. In this respect, this study developed a new comprehensive urban ecosystem sustainability assessment tool entitled the 'Micro-level Urban-ecosystem Sustainability IndeX' (MUSIX). The MUSIX model is an indicator-based indexing model that investigates the factors affecting urban sustainability in a local context. The model outputs provide local and micro-level sustainability reporting guidance to help policy-making concerning environmental issues. A multi-method research approach, based on both quantitative and qualitative analysis, was employed in the construction of the MUSIX model. First, qualitative research was conducted through an interpretive and critical literature review to develop the theoretical framework and select the indicators. Afterwards, quantitative research was conducted through statistical and spatial analyses for data collection, processing and model application. The MUSIX model was tested in four pilot study sites selected from the Gold Coast City, Queensland, Australia. The model results detected the sustainability performance of current urban settings with reference to six main issues of urban development: (1) hydrology, (2) ecology, (3) pollution, (4) location, (5) design, and (6) efficiency. For each category, a set of core indicators was assigned which are intended to: (1) benchmark the current situation, strengths and weaknesses; (2) evaluate the efficiency of implemented plans; and (3) measure the progress towards sustainable development. While the indicator set of the model provided specific information about the environmental impacts in the area at the parcel scale, the composite index score provided general information about the sustainability of the area at the neighbourhood scale. Finally, in light of the model findings, integrated ecological planning strategies were developed to guide the preparation and assessment of development and local area plans in conjunction with the Gold Coast Planning Scheme, which establishes regulatory provisions to achieve ecological sustainability through the formulation of place codes, development codes, constraint codes and other assessment criteria that provide guidance for best practice development solutions.
These relevant strategies can be summarised as follows:
• Establishing hydrological conservation through sustainable stormwater management in order to preserve the Earth's water cycle and aquatic ecosystems;
• Providing ecological conservation through sustainable ecosystem management in order to protect biological diversity and maintain the integrity of natural ecosystems;
• Improving environmental quality through developing pollution prevention regulations and policies in order to promote high quality water resources, clean air and enhanced ecosystem health;
• Creating sustainable mobility and accessibility through designing better local services and walkable neighbourhoods in order to promote safe environments and healthy communities;
• Designing the urban environment sustainably through climate-responsive design in order to increase the efficient use of solar energy to provide thermal comfort; and
• Using renewable resources through creating efficient communities in order to provide long-term management of natural resources for the sustainability of future generations.
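As a hedged illustration of the indicator-to-index aggregation described in this abstract (parcel-scale indicators normalised against benchmarks and rolled up into a neighbourhood-scale composite score), the following Python sketch uses entirely hypothetical indicator names, benchmark ranges and weights; the actual MUSIX indicators, thresholds and weighting scheme are those defined in the thesis.

```python
# Hypothetical sketch (not the MUSIX implementation) of an indicator-based
# composite index: normalise each parcel-scale indicator against a benchmark
# range, take a weighted average per parcel, then aggregate parcels to a
# neighbourhood-scale score.
import numpy as np

# Hypothetical benchmark ranges (worst, best) for a few illustrative indicators
BENCHMARKS = {
    "impervious_surface_pct":  (100.0, 0.0),   # lower is better
    "tree_canopy_pct":         (0.0, 100.0),   # higher is better
    "stormwater_retained_pct": (0.0, 100.0),   # higher is better
}

def normalise(value, worst, best):
    """Scale an indicator onto [0, 1], where 1 corresponds to the 'best' benchmark."""
    score = (value - worst) / (best - worst)
    return float(np.clip(score, 0.0, 1.0))

def parcel_score(indicators, weights=None):
    """Weighted average of the normalised indicators for one parcel."""
    keys = list(BENCHMARKS)
    w = np.ones(len(keys)) if weights is None else np.asarray(weights, dtype=float)
    scores = np.array([normalise(indicators[k], *BENCHMARKS[k]) for k in keys])
    return float(np.average(scores, weights=w))

# Neighbourhood-scale index = mean of its parcels' scores (equal weighting assumed)
parcels = [
    {"impervious_surface_pct": 80, "tree_canopy_pct": 10, "stormwater_retained_pct": 20},
    {"impervious_surface_pct": 55, "tree_canopy_pct": 30, "stormwater_retained_pct": 45},
]
neighbourhood_index = float(np.mean([parcel_score(p) for p in parcels]))
print(round(neighbourhood_index, 3))
```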
Abstract:
Advances in Information and Communication Technologies have the potential to improve many facets of modern healthcare service delivery. The implementation of electronic health records systems is a critical part of an eHealth system. Despite the potential gains, there are several obstacles that limit the wider development of electronic health record systems. Among these are the perceived threats to the security and privacy of patients’ health data, and a widely held belief that these cannot be adequately addressed. We hypothesise that the major concerns regarding eHealth security and privacy cannot be overcome through the implementation of technology alone. Human dimensions must be considered when analysing the provision of the three fundamental information security goals: confidentiality, integrity and availability. A sociotechnical analysis to establish the information security and privacy requirements when designing and developing a given eHealth system is important and timely. A framework that accommodates consideration of the legislative requirements and human perspectives in addition to the technological measures is useful in developing a measurable and accountable eHealth system. Successful implementation of this approach would enable the possibilities, practicalities and sustainabilities of proposed eHealth systems to be realised.
Abstract:
Migraine is a common neurological disorder characterised by debilitating head pain and an assortment of additional symptoms which can include nausea, emesis, photophobia, phonophobia and occasionally visual sensory disturbances. Migraine is a complex disease caused by an interplay between predisposing genetic variants and environmental factors. It affects approximately 12 % of studied Caucasian populations with affected individuals being predominantly female. Genes involved in neurological, vascular or hormonal pathways have all been implicated in predisposition towards developing migraine. All of these are nuclear encoded genes, but given the role of mitochondria in a number of neurological disorders and in energy production it is possible that mitochondrial variants may play a role in the pathogenesis of this disease. Mitochondrial DNA has been a useful tool for studying population genetics and human genetic diseases due to the clear inheritance shown through successive generations. Given the clear gender bias found in migraine patients it may be important to investigate X-linked inheritance and mitochondrial-related variants in this disorder. This paper explores the possibility that mitochondrial DNA changes may play a role in migraine. Few variants in the mitochondrial genome have so far been investigated in migraine and new studies should be aimed towards investigating the role of mitochondrial DNA in this common disorder.
Abstract:
One of the most common ways to share project knowledge is to capture the positive and negative aspects of projects in the form of lessons learned (LL). If effectively used, this process can assist project managers in reusing project knowledge and preventing future projects from repeating mistakes. Nevertheless, the process of capturing, storing, reviewing and reusing LL often remains suboptimal. Despite the potential for rich knowledge capture, lessons are often documented as simple, line-item statements devoid of context. Findings from an empirical investigation across four cases revealed a range of reasons related to the perceived quality, process and visibility of LL that lead to their limited use and application. Drawn from the cross-case analysis, this paper investigates an integrated approach to LL involving the use of a collaborative Web-based tool, which is easily accessible, intelligible and user-friendly, allowing more effective sharing of project knowledge and overcoming existing problems with LL.