728 results for Skin Modelling
Abstract:
Background: The hedgehog signaling pathway is vital in early development but then becomes dormant, except in some cancers. Hedgehog inhibitors are being developed for potential use in cancer. Objectives/Methods: The objective of this evaluation is to review the initial clinical studies of the hedgehog inhibitor GDC-0449 in subjects with cancer. Results: Phase I trials have shown that GDC-0449 has benefits in subjects with metastatic or locally advanced basal-cell carcinoma and in one subject with medulloblastoma. GDC-0449 was well tolerated. Conclusions: Long-term efficacy and safety studies of GDC-0449 in these conditions and other solid cancers are now underway. These clinical trials with GDC-0449, and trials with other hedgehog inhibitors, will reveal whether inhibiting the hedgehog pathway is beneficial and safe in a wide range of solid tumours.
Abstract:
With an increasing level of collaboration amongst researchers, software developers and industry practitioners over the past three decades, building information modelling (BIM) is now recognized as an emerging technological and procedural shift within the architecture, engineering and construction (AEC) industry. BIM is not only considered a way to make a profound impact on the AEC professions, but is also regarded as an approach to assist the industry in developing new ways of thinking and practice. Despite the widespread development and recognition of BIM, a succinct and systematic review of existing BIM research and achievements is scarce. It is also necessary to take stock of existing applications and have a fresh look at where BIM should be heading and how it can benefit from the advances being made. This paper first presents a review of BIM research and achievements in the AEC industry. A number of suggestions are then made for future research in BIM. This paper maintains that the value of BIM during the design and construction phases has been well documented over the last decade, and that new research needs to expand the level of development and analysis from the design/build stage to post-construction and facility asset management. New research in BIM could also move beyond the traditional building type to managing a broader range of facilities and built assets and providing preventative maintenance schedules for sustainable and intelligent buildings.
Abstract:
Many infrastructure and utility systems, such as electricity and telecommunications in Europe and North America, used to be operated as monopolies, if not state-owned enterprises. However, they have since been broken up into groups of smaller companies managed by different stakeholders. Railways are no exception. Since the early 1980s, there have been reforms in the shape of restructuring of the national railways in different parts of the world. Continuous refinements are still being made to allow better utilisation of railway resources and quality of service. There has been growing interest within the industry in understanding the impacts of these reforms on operational efficiency and constraints. A number of post-evaluations have been conducted by analysing the performance of the stakeholders with respect to their profits (Crompton and Jupe 2003), quality of train service (Shaw 2001) and engineering operations (Watson 2001). Results from these studies are valuable for future improvement of the system, followed by a new cycle of post-evaluations. However, direct implementation of these changes is often costly and the consequences take a long time (e.g. years) to surface. With the advance of fast computing technologies, computer simulation is a cost-effective means to evaluate a hypothetical change in a system prior to actual implementation. For example, simulation suites have been developed to study a variety of traffic control strategies according to sophisticated models of train dynamics, traction and power systems (Goodman, Siu and Ho 1998, Ho and Yeung 2001). Unfortunately, under the restructured railway environment, it is by no means easy to model the complex behaviour of the stakeholders and the interactions between them. The multi-agent system (MAS) is a recently developed modelling technique which may be useful in assisting the railway industry to conduct simulations of the restructured railway system. In MAS, a real-world entity is modelled as a software agent that is autonomous, reactive to changes, and able to initiate proactive actions and social communicative acts. MAS has been applied in the areas of supply-chain management processes (García-Flores, Wang and Goltz 2000, Jennings et al. 2000a, b) and e-commerce activities (Au, Ngai and Parameswaran 2003, Liu and You 2003), in which the objectives and behaviour of the buyers and sellers are captured by software agents. It is therefore beneficial to investigate the suitability and feasibility of applying agent modelling in railways and the extent to which it might help in developing better resource management strategies. This paper sets out to examine the benefits of using MAS to model the resource management process in railways. Section 2 first describes the business environment after the railway reforms. The problems emerging from the restructuring process are then identified in section 3. Section 4 describes the realisation of a MAS for railway resource management under the restructured scheme and the feasibility studies expected from the model.
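As a flavour of the agent properties this abstract names (autonomy, reactivity, proactivity, social communication), here is a minimal sketch. It is not the paper's implementation; the agent and message names (OperatorAgent, "request") are hypothetical.

```python
# Minimal sketch of a software agent, illustrating the four MAS properties
# named in the abstract. All names here are hypothetical, not from the paper.
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    performative: str   # e.g. "request", "accept", "refuse"
    content: dict

@dataclass
class OperatorAgent:
    name: str
    inbox: list = field(default_factory=list)

    def perceive(self, message: Message) -> None:
        """Reactive: queue an incoming message for handling."""
        self.inbox.append(message)

    def step(self) -> list:
        """Autonomous/proactive: decide on actions each simulation tick."""
        replies = []
        while self.inbox:
            msg = self.inbox.pop(0)
            if msg.performative == "request":
                # Social: answer another stakeholder's request for a resource.
                replies.append(Message(self.name, "accept", msg.content))
        return replies

# Usage: one operator reacting to a (hypothetical) track-slot request.
agent = OperatorAgent("operator_A")
agent.perceive(Message("operator_B", "request", {"resource": "track_slot_7"}))
print(agent.step())
```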
Abstract:
This paper presents the development of a simulation model of passenger flow in a metro station. The model allows studies of passenger flow in stations with different layouts and facilities, thus providing the operators with valuable information, such as passenger flow and passenger density at critical locations and passenger-handling facilities within a station. The adoption of the concept of Petri nets in the simulation model is discussed. Examples are provided to demonstrate its application to passenger flow analysis, train scheduling and the testing of alternative station layouts.
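To illustrate the Petri-net idea behind such a model, here is a minimal sketch in which tokens stand for passengers and transitions move them between station facilities. The places and transitions are hypothetical, not the paper's model.

```python
# A minimal Petri-net sketch: tokens are passengers, places are station
# facilities, transitions fire when their input places hold enough tokens.
places = {"concourse": 5, "escalator": 0, "platform": 0}

# Each transition maps input-place token demands to output-place tokens.
transitions = {
    "enter_escalator": ({"concourse": 1}, {"escalator": 1}),
    "reach_platform":  ({"escalator": 1}, {"platform": 1}),
}

def enabled(name):
    inputs, _ = transitions[name]
    return all(places[p] >= n for p, n in inputs.items())

def fire(name):
    inputs, outputs = transitions[name]
    for p, n in inputs.items():
        places[p] -= n
    for p, n in outputs.items():
        places[p] += n

# Fire transitions until no movement is possible (all passengers on platform).
while any(enabled(t) for t in transitions):
    for t in transitions:
        if enabled(t):
            fire(t)
print(places)   # {'concourse': 0, 'escalator': 0, 'platform': 5}
```

In a station model, extra places and timed transitions would represent gates, stairs and train arrivals, so congestion appears as token accumulation at a place.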
Abstract:
Popular wireless network standards, such as IEEE 802.11/15/16, are increasingly adopted in real-time control systems. However, they are not designed for real-time applications. Therefore, the performance of such wireless networks needs to be carefully evaluated before the systems are implemented and deployed. While efforts have been made to model general wireless networks with completely random traffic generation, there is a lack of theoretical investigation into the modelling of wireless networks with periodic real-time traffic. Focusing on the widely used IEEE 802.11 standard and its distributed coordination function (DCF) in soft-real-time control applications, this paper develops an analytical Markov model to quantitatively evaluate the network quality-of-service (QoS) performance in periodic real-time traffic environments. The performance indices evaluated include throughput capacity, transmission delay and packet loss ratio, which are crucial for real-time QoS guarantees in real-time control applications. They are derived under the critical real-time traffic condition, which is formally defined in this paper to characterize the marginal satisfaction of real-time performance constraints.
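For readers unfamiliar with Markov analysis of the DCF, the sketch below shows the classic Bianchi-style saturation model: a fixed point between the per-slot transmission probability tau and the conditional collision probability p. This is only to give the flavour of such models; the paper's model for periodic real-time traffic is different and is not reproduced here.

```python
# Classic Bianchi-style DCF Markov analysis (saturation case), shown as a
# hedged illustration of DCF modelling, not the paper's periodic-traffic model.
def dcf_fixed_point(n, W=32, m=5, iters=500):
    """Solve tau = f(p), p = 1 - (1 - tau)^(n-1) by damped fixed-point iteration.

    n: number of contending stations, W: minimum contention window,
    m: maximum backoff stage (CWmax = 2^m * W).
    """
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)          # conditional collision prob.
        new_tau = (2.0 * (1.0 - 2.0 * p) /
                   ((1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)))
        tau = 0.5 * tau + 0.5 * new_tau            # damped update for stability
    return tau, p

tau, p = dcf_fixed_point(n=10)
print(f"transmission prob. tau={tau:.4f}, collision prob. p={p:.4f}")
```

From tau and p, throughput, delay and loss indices like those in the abstract can then be derived for a given traffic condition.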
Abstract:
Proper application of sunscreen is an essential part of an effective public health strategy for skin cancer prevention. Insufficient application is common among sunbathers; it results in decreased sun protection and may therefore lead to increased UV damage of the skin. However, no objective measure of sunscreen application thickness (SAT) is currently available for field-based use. We present a method to detect SAT on human skin, enabling the amount of sunscreen applied to be determined and compared to manufacturer recommendations. Using a skin swabbing method and subsequent spectrophotometric analysis, we derived SAT on skin (in mg sunscreen per cm2 of skin area) through the concentration–absorption relationship of sunscreen determined in laboratory experiments. The analysis differentiated SATs between 0.25 and 4 mg cm−2 and showed a small but significant decrease in concentration over time post-application. A field study was performed in which the heterogeneity of sunscreen application could be investigated. The proposed method is a low-cost, non-invasive method for the determination of SAT on skin, and it can be used as a valid tool in field- and population-based studies.
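The concentration–absorption step can be sketched as follows: absorbance of the swab extract is converted to sunscreen mass via a laboratory calibration line (Beer–Lambert type), then normalised by the swabbed skin area. All numeric constants below are hypothetical, not the study's calibration.

```python
# Hedged sketch of converting swab-extract absorbance to sunscreen application
# thickness (SAT). Slope/intercept are hypothetical calibration values fitted
# from laboratory standards of known concentration, not the study's numbers.
def sat_mg_per_cm2(absorbance, extract_volume_ml, swab_area_cm2,
                   slope=0.8, intercept=0.02):
    """Return sunscreen application thickness in mg/cm^2."""
    conc_mg_per_ml = (absorbance - intercept) / slope   # invert calibration line
    mass_mg = conc_mg_per_ml * extract_volume_ml        # total sunscreen recovered
    return mass_mg / swab_area_cm2                      # normalise by skin area

# Example: a swab of 4 cm^2 of skin eluted into 10 ml of solvent.
print(round(sat_mg_per_cm2(0.66, extract_volume_ml=10, swab_area_cm2=4), 2))
# -> 2.0 mg/cm^2, within the 0.25-4 mg/cm^2 range the method differentiates
```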
Abstract:
Cutaneous cholecalciferol synthesis has not been considered in making recommendations for vitamin D intake. Our objective was to model the effects of sun exposure, vitamin D intake, and skin reflectance (pigmentation) on serum 25-hydroxyvitamin D (25[OH]D) in young adults with a wide range of skin reflectance and sun exposure. Four cohorts of participants (n = 72 total) were studied for 7-8 wk in the fall, winter, spring, and summer in Davis, CA [38.5° N, 121.7° W, Elev. 49 ft (15 m)]. Skin reflectance was measured using a spectrophotometer, vitamin D intake using food records, and sun exposure using polysulfone dosimeter badges. A multiple regression model (R^2 = 0.55; P < 0.0001) was developed and used to predict the serum 25(OH)D concentration for participants with low [median for African ancestry (AA)] and high [median for European ancestry (EA)] skin reflectance and with low [20th percentile, ~20 min/d, ~18% body surface area (BSA) exposed] and high (80th percentile, ~90 min/d, ~35% BSA exposed) sun exposure, assuming an intake of 200 IU/d (5 µg/d). Predicted serum 25(OH)D concentrations for AA individuals with low and high sun exposure were 24 and 42 nmol/L in the winter and 40 and 60 nmol/L in the summer. Corresponding values for EA individuals were 35 and 60 nmol/L in the winter and 58 and 85 nmol/L in the summer. To achieve 25(OH)D ≥ 75 nmol/L, we estimate that EA individuals with high sun exposure need a vitamin D intake of 1300 IU/d in the winter, and AA individuals with low sun exposure need 2100-3100 IU/d year-round.
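The kind of multiple regression described here can be sketched as below: serum 25(OH)D regressed on skin reflectance, sun exposure and vitamin D intake. The data are synthetic and the generating coefficients are hypothetical; the study's fitted model (R^2 = 0.55) is not reproduced.

```python
# Hedged sketch of a multiple regression of serum 25(OH)D on reflectance,
# sun exposure and intake, fit by ordinary least squares on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 72                                         # matches the reported sample size
reflectance = rng.uniform(20, 70, n)           # % skin reflectance
sun = rng.uniform(10, 100, n)                  # min/d weighted by %BSA exposed
intake = rng.uniform(100, 600, n)              # IU/d from food records

# Hypothetical generating model for the synthetic outcome (nmol/L).
serum = 5 + 0.4 * reflectance + 0.3 * sun + 0.02 * intake + rng.normal(0, 8, n)

X = np.column_stack([np.ones(n), reflectance, sun, intake])
beta, *_ = np.linalg.lstsq(X, serum, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((serum - pred) ** 2) / np.sum((serum - serum.mean()) ** 2)
print("coefficients:", beta.round(3), "R^2:", round(r2, 2))
```

Prediction for a given profile (e.g. low reflectance, low sun, 200 IU/d) is then just the dot product of that covariate vector with the fitted coefficients.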
Abstract:
Real-world business processes are resource-intensive. In work environments human resources usually multitask, both human and non-human resources are typically shared between tasks, and multiple resources are sometimes necessary to undertake a single task. However, current Business Process Management Systems focus on task-resource allocation in terms of individual human resources only and lack support for a full spectrum of resource classes (e.g., human or non-human, application or non-application, individual or teamwork, schedulable or unschedulable) that could contribute to tasks within a business process. In this paper we develop a conceptual data model of resources that takes into account the various resource classes and their interactions. The resulting conceptual resource model is validated using a real-life healthcare scenario.
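One way to picture the conceptual data model described here is a sketch in which resource classes (human/non-human, application/non-application, schedulable/unschedulable) and multi-resource tasks are explicit. The class and attribute names below are illustrative, not the paper's validated model.

```python
# Illustrative sketch of a resource data model spanning the classes named in
# the abstract; names and attributes are hypothetical, not the paper's model.
from dataclasses import dataclass, field
from enum import Enum

class Kind(Enum):
    HUMAN = "human"
    NON_HUMAN = "non-human"

@dataclass
class Resource:
    name: str
    kind: Kind
    schedulable: bool = True      # schedulable vs. unschedulable class
    is_application: bool = False  # application vs. non-application class

@dataclass
class Task:
    name: str
    # Multiple resources may be needed for one task, and a resource
    # may be shared between tasks (as the abstract observes).
    required: list = field(default_factory=list)

nurse = Resource("nurse", Kind.HUMAN)
mri = Resource("MRI scanner", Kind.NON_HUMAN)
scan = Task("patient scan", required=[nurse, mri])   # teamwork of mixed classes
print([r.name for r in scan.required])
```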
Abstract:
In a seminal data mining article, Leo Breiman [1] argued that to develop effective predictive classification and regression models, we need to move away from the sole dependency on statistical algorithms and embrace a wider toolkit of modeling algorithms that includes data mining procedures. Nevertheless, many researchers still rely solely on statistical procedures when undertaking data modeling tasks; the sole reliance on these procedures has led to the development of irrelevant theory and questionable research conclusions ([1], p. 199). We will outline initiatives that the HPC & Research Support group is undertaking to engage researchers with data mining tools and techniques, including a new range of seminars, workshops, and one-on-one consultations covering data mining algorithms, the relationship between data mining and the research cycle, and the limitations and problems of these new algorithms. Organisational limitations and restrictions on these initiatives are also discussed.
Abstract:
Introduction: Excessive exposure to ultraviolet (UV) radiation from sunlight is a causative factor in the development of skin damage and skin cancer. Little research has been undertaken to assess the sun exposure linked to skin damage inside buildings or behind window glass. This project directly addressed this issue by aiming to assess the role that UV exposure plays in skin damage for indoor workers and drivers. Methods: Personal UV exposure was measured using UV-sensitive polymer dosimeters for 41 indoor workers and 3 professional drivers. Physical measurements of skin characteristics, including skin pigmentation and UV-induced skin photoaging, were also taken. In addition, demographic information along with phenotypic characteristics, sun exposure and sun protection practice history, and history of skin damage were assessed through a questionnaire. Results: Indoor workers typically received low doses of UV radiation. However, one driver received a high dose (13 J/cm2 UVA and 4.99 MED UVB on the arm). Age and years residing in Australia had a positive correlation with UV-induced skin pigmentation. The number of major sunburns before the age of 18 was a risk factor for skin damage in adults. Participants with fair skin, non-black hair and blue/green/blue-grey eyes were more likely to have skin damage related to sun exposure. Conclusions: A person's age, years residing in Australia, number of major sunburns, skin colour, hair colour and eye colour are important factors associated with the development of sun-related skin damage in workers. 'Real world' implications: 1. The number of major sunburns before the age of 18 was a risk factor for skin damage in adults. This clearly confirms the importance of early prevention: protecting the skin of the young generation from extensive sun exposure should be significant for further prevention of skin damage. 2. It is unsurprising that age and years residing in Australia were associated with UV-related skin damage. The general public should therefore reinforce their sun protection measures and check their skin regularly. 3. Drivers should take sun protection measures during working hours between sunrise and sunset.
Abstract:
The link between measured sub-saturated hygroscopicity and the cloud activation potential of secondary organic aerosol particles, produced by the chamber photo-oxidation of α-pinene in the presence or absence of ammonium sulphate seed aerosol, was investigated using two models of varying complexity. A simple single-hygroscopicity-parameter model and a more complex model (incorporating surface effects) were used to assess the detail required to predict cloud condensation nucleus (CCN) activity from sub-saturated water uptake. Sub-saturated water uptake measured by three hygroscopicity tandem differential mobility analyser (HTDMA) instruments was used to determine the water activity for use in the models. The predicted CCN activity was compared to the activation potential measured using a continuous-flow CCN counter. Reconciliation of the more complex model formulation with measured cloud activation could be achieved with widely different assumed surface tension behaviours of the growing droplet; the outcome was entirely determined by the instrument used as the source of water activity data. This unreliable derivation of water activity as a function of solute concentration from sub-saturated hygroscopicity data indicates a limitation in the use of such data for predicting the cloud condensation nucleus behaviour of particles with a significant organic fraction. Similarly, the ability of the simpler single-parameter model to predict cloud activation behaviour depended on the instrument used to measure sub-saturated hygroscopicity and the relative humidity used to provide the model input. However, agreement was observed for inorganic salt solution particles, which were measured by all instruments in agreement with theory. Given the differences in data from validated and extensively used HTDMA instruments, the detail required to predict CCN activity from sub-saturated hygroscopicity cannot be stated with certainty. In order to narrow the gap between measurements of hygroscopic growth and CCN activity, the processes involved must be understood and the instrumentation extensively quality assured. Owing to the differences in HTDMA data, it is impossible to say from the results presented here whether: (i) surface tension suppression occurs; (ii) bulk-to-surface partitioning is important; or (iii) the water activity coefficient changes significantly as a function of the solute concentration.
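The single-hygroscopicity-parameter approach referred to here is commonly written in kappa-Köhler form: given a dry diameter and kappa, the critical supersaturation is the peak of the saturation ratio over wet diameters. The sketch below uses standard textbook water constants, not the paper's fitted inputs.

```python
# Hedged sketch of a kappa-Kohler-type single-parameter calculation: scan wet
# diameters for the maximum saturation ratio (the critical supersaturation).
import numpy as np

def critical_supersaturation(d_dry, kappa, T=298.15):
    sigma, Mw, rho_w, R = 0.072, 0.018, 1000.0, 8.314  # water properties (SI)
    A = 4 * sigma * Mw / (R * T * rho_w)               # Kelvin term coefficient
    D = np.linspace(d_dry * 1.01, d_dry * 100, 200_000)  # wet diameters (m)
    # Solute (Raoult) term times Kelvin (curvature) term:
    S = (D**3 - d_dry**3) / (D**3 - d_dry**3 * (1 - kappa)) * np.exp(A / D)
    return (S.max() - 1) * 100                         # percent supersaturation

# 100 nm dry particle: kappa ~0.1 (organic-rich) vs. ~0.6 (ammonium sulphate).
print(round(critical_supersaturation(100e-9, 0.1), 3),
      round(critical_supersaturation(100e-9, 0.6), 3))
```

In the study's framing, kappa is derived from sub-saturated HTDMA growth factors and the predicted critical supersaturation is then compared against the CCN counter measurement.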
Abstract:
Software used by architectural and industrial designers has moved from being a tool for drafting towards use in verification, simulation, project management and remote project sharing. In more advanced models, parameters of the designed object can be adjusted so that a family of variations can be produced rapidly. With advances in computer aided design technology, numerous design options can now be generated and analyzed in real time. However, the use of digital tools to support design as an activity is still at an early stage and has largely been limited in functionality with regard to the design process. To date, major CAD vendors have not developed an integrated tool that is able both to leverage specialized design knowledge from various discipline domains (known as expert knowledge systems) and to support the creation of design alternatives that satisfy different forms of constraints. We propose that evolutionary computing and machine learning be linked with parametric design techniques to record and respond to a designer's own way of working and design history. It is expected that this will lead to results that impact future work on design support systems (ergonomics and interface), as well as implicit constraint and problem definition for problems that are difficult to quantify.
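A minimal sketch of the direction proposed here is an evolutionary loop searching a parametric design space under constraints. The objective and constraint below are toy placeholders, not a real design evaluator.

```python
# Toy evolutionary search over a parametric design vector; the fitness and
# constraint are hypothetical stand-ins for a real design evaluation.
import random

def fitness(params):
    width, height, depth = params
    score = width * height                  # e.g. maximise facade area
    if width * height * depth > 500:        # hypothetical volume constraint
        score -= 1000                       # penalise infeasible designs
    return score

def mutate(params, scale=0.5):
    return [max(0.1, p + random.gauss(0, scale)) for p in params]

population = [[random.uniform(1, 10) for _ in range(3)] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]               # keep the fittest designs
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]

print("best design:", [round(p, 2) for p in max(population, key=fitness)])
```

In the proposed system, the fitness function would instead encode expert knowledge and constraints learned from the designer's recorded history.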
Abstract:
Background, aim, and scope: Urban motor vehicle fleets are a major source of particulate matter pollution, especially of ultrafine particles (diameters < 0.1 µm), and exposure to particulate matter has known serious health effects. A considerable body of literature is available on vehicle particle emission factors derived using a wide range of different measurement methods for different particle sizes, conducted in different parts of the world. Therefore, choosing the most suitable particle emission factors to use in transport modelling and health impact assessments presents a very difficult task. The aim of this study was to derive a comprehensive set of tailpipe particle emission factors for different vehicle and road type combinations, covering the full size range of particles emitted, which are suitable for modelling urban fleet emissions. Materials and methods: A large body of data available in the international literature on particle emission factors for motor vehicles derived from measurement studies was compiled and subjected to advanced statistical analysis to determine the most suitable emission factors to use in modelling urban fleet emissions. Results: This analysis resulted in the development of five statistical models which explained 86%, 93%, 87%, 65% and 47% of the variation in published emission factors for particle number, particle volume, PM1, PM2.5 and PM10, respectively. A sixth model for total particle mass was proposed, but no significant explanatory variables were identified in the analysis. From the outputs of these statistical models, the most suitable particle emission factors were selected. This selection was based on examination of the statistical robustness of the model outputs, including consideration of conservative average particle emission factors with the lowest standard errors, narrowest 95% confidence intervals and largest sample sizes, and on the explanatory model variables, which were Vehicle Type (all particle metrics), Instrumentation (particle number and PM2.5), Road Type (PM10), and Size Range Measured and Speed Limit on the Road (particle volume). Discussion: A multiplicity of factors needs to be considered in determining emission factors that are suitable for modelling motor vehicle emissions, and this study derived a set of average emission factors suitable for quantifying motor vehicle tailpipe particle emissions in developed countries. Conclusions: The comprehensive set of tailpipe particle emission factors presented in this study for different vehicle and road type combinations enables the full size range of particles generated by fleets to be quantified, including ultrafine particles (measured in terms of particle number). These emission factors have particular application for regions which may lack funding to undertake measurements, or have insufficient measurement data upon which to derive emission factors for their region. Recommendations and perspectives: In urban areas, motor vehicles continue to be a major source of particulate matter pollution and of ultrafine particles. To manage this major pollution source, it is critical that methods are available to quantify the full size range of particles emitted, for traffic modelling and health impact assessments.
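To make the selection criteria concrete (lowest standard errors, narrowest 95% confidence intervals), one hedged illustration is an inverse-variance weighted average of published emission factors; the numbers below are made up and this is not the study's statistical model.

```python
# Hedged sketch: combine published emission factors into a weighted average
# with a 95% confidence interval. The (EF, standard error) pairs are invented,
# e.g. particle number per vehicle-km for one vehicle/road type combination.
import math

studies = [(2.1e14, 0.4e14), (1.8e14, 0.3e14), (2.6e14, 0.6e14)]

weights = [1 / se**2 for _, se in studies]          # inverse-variance weights
mean = sum(w * ef for (ef, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1 / sum(weights))                    # standard error of the mean
lo, hi = mean - 1.96 * se, mean + 1.96 * se         # 95% confidence interval
print(f"weighted EF = {mean:.2e} (95% CI {lo:.2e} to {hi:.2e})")
```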
Abstract:
The multi-criteria decision-making methods Preference Ranking Organisation METHod for Enrichment Evaluation (PROMETHEE) and Graphical Analysis for Interactive Assistance (GAIA), and the two-way Positive Matrix Factorization (PMF) receptor model, were applied to airborne fine particle compositional data collected at three sites in Hong Kong during two monitoring campaigns held from November 2000 to October 2001 and November 2004 to October 2005. PROMETHEE/GAIA indicated that air quality at all three sites was worse during the later monitoring campaign, and that the order of air quality at the sites during each campaign was: rural site > urban site > roadside site. The PMF analysis, on the other hand, identified 6 common sources at all of the sites (diesel vehicles, fresh sea salt, secondary sulphate, soil, aged sea salt and oil combustion), which together accounted for approximately 68.8 ± 8.7% of the fine particle mass at the sites. In addition, road dust, gasoline vehicles, biomass burning, secondary nitrate, and metal processing were identified at some of the sites. Secondary sulphate was found to be the highest contributor to the fine particle mass at the rural and urban sites, with vehicle emissions a high contributor at the roadside site. The PMF results are broadly similar to those obtained in a previous analysis by PCA/APCS; however, the PMF analysis resolved more factors at each site than the PCA/APCS. In addition, the study demonstrated that combined results from multi-criteria decision-making analysis and receptor modelling can provide more detailed information that can be used to formulate the scientific basis for mitigating air pollution in the region.
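PMF factorises the samples-by-species concentration matrix X into non-negative source contributions G and source profiles F (X ≈ G·F). True PMF weights each entry by its measurement uncertainty; as a simpler, hedged stand-in, the sketch below uses plain non-negative matrix factorisation on synthetic data rather than the Hong Kong dataset.

```python
# Hedged stand-in for PMF: unweighted non-negative matrix factorisation of a
# synthetic samples-by-species matrix (real PMF is uncertainty-weighted).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
true_G = rng.uniform(0, 5, (200, 6))     # 200 samples, 6 sources (as resolved)
true_F = rng.uniform(0, 1, (6, 15))      # 15 chemical species per profile
X = true_G @ true_F + rng.uniform(0, 0.05, (200, 15))   # noisy "measurements"

model = NMF(n_components=6, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)               # source contributions per sample
F = model.components_                    # source profiles (species signatures)
print("reconstruction error:", round(model.reconstruction_err_, 2))
```

Rows of F are then matched to physically meaningful sources (e.g. sea salt, secondary sulphate) by inspecting which species dominate each profile.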
Abstract:
Visualisation provides a method to efficiently convey and understand the complex nature and processes of groundwater systems. This technique has been applied to the Lockyer Valley to aid in comprehending the current condition of the system. The Lockyer Valley in southeast Queensland hosts intensive irrigated agriculture sourcing groundwater from alluvial aquifers. The valley is around 3000 km2 in area, and the alluvial deposits are typically 1-3 km wide and 20-35 m deep in the main channels, reducing in size in subcatchments. The alluvium is configured as a series of elongate “fingers”. In this roughly circular valley, recharge to the alluvial aquifers comes largely from seasonal storm events on the surrounding ranges. The ranges are overlain by basaltic aquifers of Tertiary age, which overall are quite transmissive. Both runoff from these ranges and infiltration into the basalts provide ephemeral flow to the streams of the valley. Throughout the valley there are over 5,000 bores extracting alluvial groundwater, plus lesser numbers extracting from the underlying sandstone bedrock. Although there are approximately 2,500 monitoring bores, the only regularly monitored area is the formally declared management zone in the lower one-third of the valley. This zone has a calibrated Modflow model (Durick and Bleakly, 2000); a broader valley-wide Modflow model was developed in 2002 (KBR) but did not have extensive extraction data for detailed calibration. Another Modflow model focused on a river confluence in a central area (Wilson, 2005), with some local production data and pumping test results. A recent subcatchment simulation model incorporates a network of bores with short-period automated hydrographic measurements (Dvoracek and Cox, 2008). The above simulation models were all based on conceptual hydrogeological models of differing scale and detail.