899 results for removing


Relevance: 10.00%

Abstract:

Researching administrative history is problematic. A trail of authoritative documents is often hard to find, and useful summaries can be difficult to organise, especially if source material is in paper formats in geographically dispersed locations. In the absence of documents, the reasons for particular decisions and the rationale underpinning particular policies can be confounded as key personnel advance in their professions and retire. The rationale for past decisions may be lost for practical purposes; and if an organisation’s memory of events is diminished, its learning through experience is also diminished. This document is published to avoid unnecessary duplication of effort by other researchers who need to investigate how policies of charging for public sector information have been justified. The author compiled this work within a somewhat limited time period, and it does not pretend to be a complete or comprehensive analysis of the issues.----- A significant part of the role of government is to provide a framework of legally enforceable rights and obligations that can support individuals and non-government organisations in their lawful activities. Accordingly, claims that governments should be more ‘business-like’ need careful scrutiny. A significant supply of goods and services occurs as non-market activity, where neither benefits nor costs are quantified within conventional accounting systems or in terms of money. Where a government decides to provide information as a service (information from land registries is archetypical), the transactions occur as a political decision made under a direct or clearly delegated authority of a parliament with the requisite constitutional powers. This is not a market transaction, and the language of the market confuses attempts to describe a number of aspects of how governments allocate resources.----- Cost recovery can be construed as an aspect of taxation, which is a sole prerogative of a parliament.
The issues are fundamental to political constitutions, but they become more complicated where states cede some taxing powers to a central government as part of a federal system. Nor should the absence of markets necessarily be construed as ‘market failure’ or even ‘government failure’. The absence is often attributable to particular technical, economic and political constraints that preclude the operation of markets. Arguably, greater care is needed in distinguishing between the polity and markets in raising revenues and allocating resources; and that needs to start by removing unhelpful references to ‘business’ in the context of government decision-making.

Relevance: 10.00%

Abstract:

Since its launch in 2001, the Creative Commons open content licensing initiative has received both praise and censure. While some have touted it as a major step towards removing the burdens copyright law imposes on creativity and innovation in the digital age, others have argued that it robs artists of their rightful income. This paper aims to provide a brief overview and analysis of the practical application of the Creative Commons licences five years after their launch. It looks at how the Creative Commons licences are being used and who is using them, and attempts to identify likely motivations for doing so. By identifying trends in how this licence use has changed over time, it also attempts to rebut arguments that Creative Commons is a movement of academics and hobbyists, and has no value for traditional organisations or working artists.

Relevance: 10.00%

Abstract:

Multipotent mesenchymal stem cells (MSCs), first identified in the bone marrow, have subsequently been found in many other tissues, including fat, cartilage, muscle, and bone. Adipose tissue has been identified as an alternative to bone marrow as a source for the isolation of MSCs, as it is neither limited in volume nor as invasive to harvest. This study compares the multipotentiality of bone marrow-derived mesenchymal stem cells (BMSCs) with that of adipose-derived mesenchymal stem cells (AMSCs) from 12 age- and sex-matched donors. Phenotypically, the cells are very similar, with only three surface markers, CD106, CD146, and HLA-ABC, differentially expressed in the BMSCs. Although colony-forming unit-fibroblast numbers in BMSCs were higher than in AMSCs, the expression of multiple stem cell-related genes, such as that of fibroblast growth factor 2 (FGF2), the Wnt pathway effectors FRAT1 and frizzled 1, and other self-renewal markers, was greater in AMSCs. Furthermore, AMSCs displayed enhanced osteogenic and adipogenic potential, whereas BMSCs formed chondrocytes more readily than AMSCs. However, when the effects of proliferation were removed from the experiment, AMSCs no longer outperformed BMSCs in their ability to undergo osteogenic and adipogenic differentiation. Inhibition of the FGF2/fibroblast growth factor receptor 1 signaling pathway demonstrated that FGF2 is required for the proliferation of both AMSCs and BMSCs, yet blocking FGF2 signaling had no direct effect on osteogenic differentiation. Disclosure of potential conflicts of interest is found at the end of this article.

Relevance: 10.00%

Abstract:

In this paper we describe the development of a three-dimensional (3D) imaging system for a 3500 tonne mining machine (dragline). Draglines are large walking cranes used for removing the dirt that covers a coal seam. Our group has been developing a dragline swing automation system since 1994. The system so far has been ‘blind’ to its external environment. The work presented in this paper attempts to give the dragline an ability to sense its surroundings. A 3D digital terrain map (DTM) is created from data obtained from a two-dimensional laser scanner while the dragline swings. Experimental data from an operational dragline are presented.
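The geometry involved, a 2D scanner sweeping out a 3D terrain map as the machine swings, can be sketched as below. This is an illustrative reconstruction, not the authors' implementation: the function names, the frame conventions, and the highest-point gridding of the DTM are all assumptions.

```python
import math

def scan_to_points(ranges, scan_angles, swing_angle, scanner_height=0.0):
    """Convert one 2D laser scan, taken at a given boom swing angle,
    into 3D Cartesian points in the dragline's frame.

    ranges      -- measured distances for each beam (metres)
    scan_angles -- beam elevation angles within the vertical scan plane (radians)
    swing_angle -- rotation of the scan plane about the vertical axis (radians)
    """
    points = []
    for r, a in zip(ranges, scan_angles):
        # Point in the vertical scan plane: horizontal reach and height.
        reach = r * math.cos(a)
        z = scanner_height + r * math.sin(a)
        # Rotate the scan plane about the vertical axis by the swing angle.
        x = reach * math.cos(swing_angle)
        y = reach * math.sin(swing_angle)
        points.append((x, y, z))
    return points

def build_dtm(points, cell=1.0):
    """Grid the accumulated point cloud into a digital terrain map:
    each (i, j) cell keeps the highest z value observed in it."""
    dtm = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if key not in dtm or z > dtm[key]:
            dtm[key] = z
    return dtm
```

Accumulating `scan_to_points` output over many swing angles and passing it to `build_dtm` yields a coarse terrain map of the kind described in the paper.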

Relevance: 10.00%

Abstract:

Background: Up to 1% of adults will suffer from leg ulceration at some time. The majority of leg ulcers are venous in origin and are caused by high pressure in the veins due to blockage or weakness of the valves in the veins of the leg. Prevention and treatment of venous ulcers is aimed at reducing the pressure, either by removing or repairing the veins, or by applying compression bandages or stockings to reduce the pressure in the veins. The vast majority of venous ulcers are healed using compression bandages. Once healed they often recur, and so it is customary to continue applying compression in the form of bandages, tights, stockings or socks in order to prevent recurrence. Compression bandages or hosiery (tights, stockings, socks) are often applied for ulcer prevention.

Objectives: To assess the effects of compression hosiery (socks, stockings, tights) or bandages in preventing the recurrence of venous ulcers. To determine whether there is an optimum pressure/type of compression to prevent recurrence of venous ulcers.

Search methods: The searches for the review were first undertaken in 2000. For this update we searched the Cochrane Wounds Group Specialised Register (October 2007), the Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library 2007 Issue 3, Ovid MEDLINE (1950 to September Week 4 2007), Ovid EMBASE (1980 to 2007 Week 40) and Ovid CINAHL (1982 to October Week 1 2007).

Selection criteria: Randomised controlled trials evaluating compression bandages or hosiery for preventing venous leg ulcers.

Data collection and analysis: Data extraction and assessment of study quality were undertaken by two authors independently.

Results: No trials compared recurrence rates with and without compression. One trial (300 patients) compared high (UK Class 3) compression hosiery with moderate (UK Class 2) compression hosiery.
An intention-to-treat analysis found no significant reduction in recurrence at five years' follow-up associated with high compression hosiery compared with moderate compression hosiery (relative risk of recurrence 0.82, 95% confidence interval 0.61 to 1.12). This analysis would tend to underestimate the effectiveness of the high compression hosiery because a significant proportion of people changed from high compression to medium compression hosiery. Compliance rates were significantly higher with medium compression than with high compression hosiery. One trial (166 patients) found no statistically significant difference in recurrence between two types of medium (UK Class 2) compression hosiery (relative risk of recurrence with Medi was 0.74, 95% confidence interval 0.45 to 1.2). Both trials reported that not wearing compression hosiery was strongly associated with ulcer recurrence, and this is circumstantial evidence that compression reduces ulcer recurrence. No trials were found which evaluated compression bandages for preventing ulcer recurrence.

Authors' conclusions: No trials compared compression with no compression for prevention of ulcer recurrence. Not wearing compression was associated with recurrence in both studies identified in this review. This is circumstantial evidence of the benefit of compression in reducing recurrence. Recurrence rates may be lower in high compression hosiery than in medium compression hosiery, and therefore patients should be offered the strongest compression with which they can comply. Further trials are needed to determine the effectiveness of hosiery prescribed in other settings, i.e. in the UK community and in countries other than the UK.
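Relative risks of the kind quoted above are computed from 2x2 trial counts, with the confidence interval taken on the log scale. A minimal sketch using the standard normal approximation; the counts below are invented for illustration and are not the trial's data.

```python
import math

def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk of group A vs group B with a confidence interval
    from the usual normal approximation on the log scale (z=1.96 -> 95% CI)."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts, NOT the trial's data: 10/100 recurrences vs 20/100.
rr, lo, hi = relative_risk(10, 100, 20, 100)
```

When the interval spans 1.0, as it does for the hypothetical counts here and for the trial's reported 0.61 to 1.12, the difference is not statistically significant.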

Relevance: 10.00%

Abstract:

Speeding remains a significant contributing factor to road trauma internationally, despite increasingly sophisticated speed management strategies being adopted around the world. Increases in travel speed are associated with increases in crash risk and crash severity. As speed choice is a voluntary behaviour, driver perceptions are important to our understanding of speeding and, importantly, to designing effective behavioural countermeasures. The four studies conducted in this program of research represent a comprehensive approach to examining psychosocial influences on driving speeds in two countries that are at very different levels of road safety development: Australia and China. Akers’ social learning theory (SLT) was selected as the theoretical framework underpinning this research and guided the development of key research hypotheses. This theory was chosen because of its ability to encompass psychological, sociological, and criminological perspectives in understanding behaviour, each of which has relevance to speeding. A mixed-method design was used to explore the personal, social, and legal influences on speeding among car drivers in Queensland (Australia) and Beijing (China). Study 1 was a qualitative exploration, via focus group interviews, of speeding among 67 car drivers recruited from south east Queensland. Participants were assigned to groups based on their age and gender, and additionally, according to whether they self-identified as speeding excessively or rarely. This study aimed to elicit information about how drivers conceptualise speeding as well as the social and legal influences on driving speeds. The findings revealed a wide variety of reasons and circumstances that appear to be used as personal justifications for exceeding speed limits. Driver perceptions of speeding as personally and socially acceptable, as well as safe and necessary were common. 
Perceptions of an absence of danger associated with faster driving speeds were evident, particularly with respect to driving alone. An important distinction between the speed-based groups related to the attention given to the driving task. Rare speeders expressed strong beliefs about the need to be mindful of safety (self and others) while excessive speeders referred to the driving task as automatic, an absent-minded endeavour, and to speeding as a necessity in order to remain alert and reduce boredom. For many drivers in this study, compliance with speed limits was expressed as discretionary rather than mandatory. Social factors, such as peer and parental influence were widely discussed in Study 1 and perceptions of widespread community acceptance of speeding were noted. In some instances, the perception that ‘everybody speeds’ appeared to act as one rationale for the need to raise speed limits. Self-presentation, or wanting to project a positive image of self was noted, particularly with respect to concealing speeding infringements from others to protect one’s image as a trustworthy and safe driver. The influence of legal factors was also evident. Legal sanctions do not appear to influence all drivers to the same extent. For instance, fear of apprehension appeared to play a role in reducing speeding for many, although previous experiences of detection and legal sanctions seemed to have had limited influence on reducing speeding among some drivers. Disregard for sanctions (e.g., driving while suspended), fraudulent demerit point use, and other strategies to avoid detection and punishment were widely and openly discussed. In Study 2, 833 drivers were recruited from roadside service stations in metropolitan and regional locations in Queensland. A quantitative research strategy assessed the relative contribution of personal, social, and legal factors to recent and future self-reported speeding (i.e., frequency of speeding and intentions to speed in the future). 
Multivariate analyses examining a range of factors drawn from SLT revealed that factors including self-identity (i.e., identifying as someone who speeds), favourable definitions (attitudes) towards speeding, personal experiences of avoiding detection and punishment for speeding, and perceptions of family and friends as accepting of speeding were all significantly associated with greater self-reported speeding. Study 3 was an exploratory, qualitative investigation of psychosocial factors associated with speeding among 35 Chinese drivers who were recruited from the membership of a motoring organisation and a university in Beijing. Six focus groups were conducted to explore similar issues to those examined in Study 1. The findings of Study 3 revealed many similarities with respect to the themes that arose in Australia. For example, there were similarities regarding personal justifications for speeding, such as the perception that posted limits are unreasonably low, the belief that individual drivers are able to determine safe travel speeds according to personal comfort with driving fast, and the belief that drivers possess adequate skills to control a vehicle at high speed. Strategies to avoid detection and punishment were also noted, though they appeared more widespread in China and also appeared, in some cases, to involve the use of a third party, a topic that was not reported by Australian drivers. Additionally, higher perceived enforcement tolerance thresholds were discussed by Chinese participants. Overall, the findings indicated perceptions of a high degree of community acceptance of speeding and a perceived lack of risk associated with speeds that were well above posted speed limits. Study 4 extended the exploratory research phase in China with a quantitative investigation involving 299 car drivers recruited from car washes in Beijing. Results revealed a relatively inexperienced sample with less than 5 years driving experience, on average. 
One third of participants perceived that the certainty of penalties when apprehended was low and a similar proportion of Chinese participants reported having previously avoided legal penalties when apprehended for speeding. Approximately half of the sample reported that legal penalties for speeding were ‘minimally to not at all’ severe. Multivariate analyses revealed that past experiences of avoiding detection and punishment for speeding, as well as favourable attitudes towards speeding, and perceptions of strong community acceptance of speeding were most strongly associated with greater self-reported speeding in the Chinese sample. Overall, the results of this research make several important theoretical contributions to the road safety literature. Akers’ social learning theory was found to be robust across cultural contexts with respect to speeding; similar amounts of variance were explained in self-reported speeding in the quantitative studies conducted in Australia and China. Historically, SLT was devised as a theory of deviance and posits that deviance and conformity are learned in the same way, with the balance of influence stemming from the ways in which behaviour is rewarded and punished (Akers, 1998). This perspective suggests that those who speed and those who do not are influenced by the same mechanisms. The inclusion of drivers from both ends of the ‘speeding spectrum’ in Study 1 provided an opportunity to examine the wider utility of SLT across the full range of the behaviour. One may question the use of a theory of deviance to investigate speeding, a behaviour that could, arguably, be described as socially acceptable and prevalent. However, SLT seemed particularly relevant to investigating speeding because of its inclusion of association, imitation, and reinforcement variables which reflect the breadth of factors already found to be potentially influential on driving speeds. 
In addition, driving is a learned behaviour requiring observation, guidance, and practice. Thus, the reinforcement and imitation concepts are particularly relevant to this behaviour. Finally, current speed management practices are largely enforcement-based and rely on the principles of behavioural reinforcement captured within the reinforcement component of SLT. Thus, the application of SLT to a behaviour such as speeding offers promise in advancing our understanding of the factors that influence speeding, as well as extending our knowledge of the application of SLT. Moreover, SLT could act as a valuable theoretical framework with which to examine other illegal driving behaviours that may not necessarily be seen as deviant by the community (e.g., mobile phone use while driving). This research also made unique contributions to advancing our understanding of the key components and the overall structure of Akers’ social learning theory. The broader SLT literature is lacking in terms of a thorough structural understanding of the component parts of the theory. For instance, debate exists regarding the relevance of, and necessity for including broader social influences in the model as captured by differential association. In the current research, two alternative SLT models were specified and tested in order to better understand the nature and extent of the influence of differential association on behaviour. Importantly, the results indicated that differential association was able to make a unique contribution to explaining self-reported speeding, thereby negating the call to exclude it from the model. The results also demonstrated that imitation was a discrete theoretical concept that should also be retained in the model. The results suggest a need to further explore and specify mechanisms of social influence in the SLT model. 
In addition, a novel approach was used to operationalise SLT variables by including concepts drawn from contemporary social psychological and deterrence-based research to enhance and extend the way that SLT variables have traditionally been examined. Differential reinforcement was conceptualised according to behavioural reinforcement principles (i.e., positive and negative reinforcement and punishment) and incorporated concepts of affective beliefs, anticipated regret, and deterrence-related concepts. Although implicit in descriptions of SLT, little research has, to date, made use of the broad range of reinforcement principles to understand the factors that encourage or inhibit behaviour. This approach has particular significance to road user behaviours in general because of the deterrence-based nature of many road safety countermeasures. The concept of self-identity was also included in the model and was found to be consistent with the definitions component of SLT. A final theoretical contribution was the specification and testing of a full measurement model prior to model testing using structural equation modelling. This process is recommended in order to reduce measurement error by providing an examination of the psychometric properties of the data prior to full model testing. Despite calls for such work for a number of decades, the current work appears to be the only example of a full measurement model of SLT. There were also a number of important practical implications that emerged from this program of research. Firstly, perceptions regarding speed enforcement tolerance thresholds were highlighted as a salient influence on driving speeds in both countries. The issue of enforcement tolerance levels generated considerable discussion among drivers in both countries, with Australian drivers reporting lower perceived tolerance levels than Chinese drivers. 
It was clear that many drivers used the concept of an enforcement tolerance in determining their driving speed, primarily with the desire to drive faster than the posted speed limit, yet remaining within a speed range that would preclude apprehension by police. The quantitative results from Studies 2 and 4 added support to these qualitative findings. Together, the findings supported previous research and suggested that a travel speed may not be seen as illegal until that speed reaches a level over the prescribed enforcement tolerance threshold. In other words, the enforcement tolerance appears to act as a ‘de facto’ speed limit, replacing the posted limit in the minds of some drivers. The findings from the two studies conducted in China (Studies 3 and 4) further highlighted the link between perceived enforcement tolerances and a ‘de facto’ speed limit. Drivers openly discussed driving at speeds that were well above posted speed limits and some participants noted their preference for driving at speeds close to ‘50% above’ the posted limit. This preference appeared to be shaped by the perception that the same penalty would be imposed if apprehended, irrespective of the speed they were travelling (at least up to 50% above the limit). Further research is required to determine whether the perceptions of Chinese drivers are mainly influenced by the Law of the People’s Republic of China or by operational practices. Together, the findings from both studies in China indicate that there may be scope to refine enforcement tolerance levels, as has happened in other jurisdictions internationally over time, in order to reduce speeding. Any attempts to do so would likely be assisted by the provision of information about the legitimacy and purpose of speed limits as well as risk factors associated with speeding, because these issues were raised by Chinese participants in the qualitative research phase.
Another important practical implication of this research for speed management in China is the way in which penalties are determined. Chinese drivers described perceptions of unfairness and a lack of transparency in the enforcement system because they were unsure of the penalty that they would receive if apprehended. Steps to enhance the perceived certainty and consistency of the system to promote a more equitable approach to detection and punishment would appear to be welcomed by the general driving public and would be more consistent with the intended theoretical (deterrence) basis that underpins the current speed enforcement approach. The use of mandatory, fixed penalties may assist in this regard. In many countries, speeding attracts penalties that are dependent on the severity of the offence. In China, there may be safety benefits gained from the introduction of a similar graduated scale of speeding penalties and fixed penalties might also help to address the issue of uncertainty about penalties and related perceptions of unfairness. Such advancements would be in keeping with the principles of best practice for speed management as identified by the World Health Organisation. Another practical implication relating to legal penalties, and applicable to both cultural contexts, relates to the issues of detection and punishment avoidance. These two concepts appeared to strongly influence speeding in the current samples. In Australia, detection avoidance strategies reported by participants generally involved activities that are not illegal (e.g., site learning and remaining watchful for police vehicles). The results from China were similar, although a greater range of strategies were reported. The most common strategy reported in both countries for avoiding detection when speeding was site learning, or familiarisation with speed camera locations. 
However, a range of illegal practices were also described by Chinese drivers (e.g., tampering with or removing vehicle registration plates so as to render the vehicle unidentifiable on camera, and use of in-vehicle radar detectors). With regard to avoiding punishment when apprehended, a range of strategies was reported by drivers from both countries, although a greater range was reported by Chinese drivers. As the results of the current research indicated that detection avoidance was strongly associated with greater self-reported speeding in both samples, efforts to reduce avoidance opportunities are strongly recommended. The practice of randomly scheduling speed camera locations, as is current practice in Queensland, offers one way to minimise site learning. The findings of this research indicated that this practice should continue. However, they also indicated that additional strategies are needed to reduce opportunities to evade detection. The use of point-to-point speed detection (also known as section control) is one such strategy.
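Point-to-point detection, mentioned above, infers speed from the time taken to travel between two camera sites, so learning the location of either camera alone does not defeat it. The sketch below is illustrative only; the units (km, hours, km/h) and the tolerance parameter are assumptions, not any jurisdiction's actual rules.

```python
def average_speed_offence(distance_km, elapsed_hours, limit_kmh, tolerance_kmh=0.0):
    """Flag an offence when the average speed over the camera-to-camera
    section exceeds the posted limit plus any enforcement tolerance."""
    avg = distance_km / elapsed_hours
    return avg, avg > limit_kmh + tolerance_kmh

# A 10 km section covered in 5 minutes: average 120 km/h against a 100 km/h limit.
avg, offence = average_speed_offence(10.0, 5 / 60, 100.0)
```

Because only the section-average speed matters, briefly slowing at each camera site no longer avoids detection, which is why the technique counters the site-learning strategy discussed above.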

Relevance: 10.00%

Abstract:

One of the main causes of above-knee or transfemoral amputation (TFA) in the developed world is trauma to the limb. The number of people undergoing TFA due to limb trauma, particularly due to war injuries, has been increasing. Typically the trauma amputee population, including war-related amputees, is otherwise healthy and active and desires to return to employment and the usual lifestyle. Consequently there is a growing need to restore long-term mobility and limb function to this population. Traditionally transfemoral amputees are provided with an artificial or prosthetic leg that consists of a fabricated socket, knee joint mechanism and a prosthetic foot. Amputees have reported several problems related to the socket of their prosthetic limb. These include pain in the residual limb, poor socket fit, discomfort and poor mobility. Removing the socket from the prosthetic limb could eliminate or reduce these problems. A solution to this is the direct attachment of the prosthesis to the residual bone (femur) inside the residual limb. This technique has been used on a small population of transfemoral amputees since 1990. A threaded titanium implant is screwed into the shaft of the femur and a second component connects between the implant and the prosthesis. A period of time is required to allow the implant to become fully attached to the bone, a process called osseointegration (OI), and able to withstand applied load; the prosthesis can then be attached. The advantages of transfemoral osseointegration (TFOI) over conventional prosthetic sockets include better hip mobility, sitting comfort and prosthetic retention and fewer skin problems on the residual limb. However, due to the length of time required for OI to progress and to complete the rehabilitation exercises, it can take up to twelve months after implant insertion for an amputee to be able to load bear and to walk unaided.
The long rehabilitation time is a significant disadvantage of TFOI and may be impeding the wider adoption of the technique. There is a need for a non-invasive method of assessing the degree of osseointegration between the bone and the implant. If such a method were capable of determining the progression of TFOI and assessing when the implant was able to withstand physiological load, it could reduce the overall rehabilitation time. Vibration analysis has been suggested as a potential technique: it is a non-destructive method of assessing the dynamic properties of a structure. Changes in the physical properties of a structure can be identified from changes in its dynamic properties. Consequently, vibration analysis, both experimental and computational, has been used to assess bone fracture healing, prosthetic hip loosening and dental implant OI with varying degrees of success. More recently, experimental vibration analysis has been used in TFOI. However, further work is needed to assess the potential of the technique and fully characterise the femur-implant system. The overall aim of this study was to develop physical and computational models of the TFOI femur-implant system and use these models to investigate the feasibility of vibration analysis to detect the process of OI. Femur-implant physical models were developed and manufactured using synthetic materials to represent four key stages of OI development (identified from a physiological model), simulated using different interface conditions between the implant and femur. Experimental vibration analysis (modal analysis) was then conducted using the physical models. The femur-implant models, representing stage one to stage four of OI development, were excited and the modal parameters obtained over the range 0–5 kHz. The results indicated the technique had limited capability in distinguishing between different interface conditions. The fundamental bending mode did not alter with interfacial changes.
However higher modes were able to track chronological changes in interface condition by the change in natural frequency, although no one modal parameter could uniquely distinguish between each interface condition. The importance of the model boundary condition (how the model is constrained) was the key finding; variations in the boundary condition altered the modal parameters obtained. Therefore the boundary conditions need to be held constant between tests in order for the detected modal parameter changes to be attributed to interface condition changes. A three dimensional Finite Element (FE) model of the femur-implant model was then developed and used to explore the sensitivity of the modal parameters to more subtle interfacial and boundary condition changes. The FE model was created using the synthetic femur geometry and an approximation of the implant geometry. The natural frequencies of the FE model were found to match the experimental frequencies within 20% and the FE and experimental mode shapes were similar. Therefore the FE model was shown to successfully capture the dynamic response of the physical system. As was found with the experimental modal analysis, the fundamental bending mode of the FE model did not alter due to changes in interface elastic modulus. Axial and torsional modes were identified by the FE model that were not detected experimentally; the torsional mode exhibited the largest frequency change due to interfacial changes (103% between the lower and upper limits of the interface modulus range). Therefore the FE model provided additional information on the dynamic response of the system and was complementary to the experimental model. The small changes in natural frequency over a large range of interface region elastic moduli indicated the method may only be able to distinguish between early and late OI progression. 
The boundary conditions applied to the FE model influenced the modal parameters to a far greater extent than the interface condition variations. Therefore the FE model, as well as the experimental modal analysis, indicated that the boundary conditions need to be held constant between tests in order for the detected changes in modal parameters to be attributed to interface condition changes alone. The results of this study suggest that in a clinical setting it is unlikely that the in vivo boundary conditions of the amputated femur could be adequately controlled or replicated over time and consequently it is unlikely that any longitudinal change in frequency detected by the modal analysis technique could be attributed exclusively to changes at the femur-implant interface. Therefore further development of the modal analysis technique would require significant consideration of the clinical boundary conditions and investigation of modes other than the bending modes.
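The bending natural frequencies tracked in the modal analysis above can be illustrated with the closed-form Euler-Bernoulli result for a uniform fixed-free (cantilever) beam. This is a textbook sketch only; the material and geometric values below are placeholders, not the femur-implant model's properties.

```python
import math

# Dimensionless roots of the cantilever frequency equation cos(x)*cosh(x) = -1.
CANTILEVER_ROOTS = [1.87510, 4.69409, 7.85476]

def cantilever_frequencies(E, I, rho, A, L, n_modes=3):
    """Natural bending frequencies (Hz) of a uniform fixed-free beam:
    f_n = lambda_n^2 / (2*pi*L^2) * sqrt(E*I / (rho*A))."""
    return [
        (lam ** 2) / (2 * math.pi * L ** 2) * math.sqrt(E * I / (rho * A))
        for lam in CANTILEVER_ROOTS[:n_modes]
    ]

# Placeholder steel-like rod, 0.4 m long, 20 mm diameter (NOT bone properties).
d = 0.02
A = math.pi * d ** 2 / 4       # cross-sectional area
I = math.pi * d ** 4 / 64      # second moment of area
freqs = cantilever_frequencies(E=200e9, I=I, rho=7800, A=A, L=0.4)
```

The formula makes the study's central difficulty visible: the frequencies depend on the stiffness term E*I and on the boundary condition (through the lambda_n roots), so a change in clamping can shift modes at least as much as a change at the interface.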

Relevância:

10.00%

Publicador:

Resumo:

The artwork was created in response to the exhibition theme, "DIGILOG+IN". It aimed to express the beauty that emerges when digital and analogue materials are combined. It visualised an organic harmony between digital and natural objects through digitalisation and built a fantasy of the digital world. However, a conceptual dilemma arose: the "digitalisation" of natural objects into a digital format merely produces a digital work. In other words, a harmony between the digital and the analogue (natural) can only be achieved through a digitalising process that removes the intrinsic nature of the analogue. The substance of the analogue therefore no longer exists in the digitally visualised form, but is virtually represented. The title of the artwork, "digitualisation", is a portmanteau of "digital" and "virtualisation"; it refers to digitally virtualising the substance of natural objects. The artwork visualised the concept of digitualisation by using natural objects (flowers) merged within a virtual space (a building entrance foyer).

Relevância:

10.00%

Publicador:

Resumo:

With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Therefore, image semantics extraction is of great importance to content-based image retrieval because it allows users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: What is the basic unit of semantic representation? How can the semantic unit be linked to high-level image knowledge? How can contextual information be stored and utilised for image annotation? In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next generation of the Web, aims at making the content of whatever type of media understandable not only to humans but also to machines. Due to the large amounts of multimedia data prevalent on the Web, researchers and industry are beginning to pay more attention to the Multimedia Semantic Web. Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, remain worthwhile investigations. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in the four phases below. 1) Salient object extraction.
A salient object serves as the basic unit in image semantic extraction as it captures the common visual property of the objects. Image segmentation is often used as the first step for detecting salient objects, but most segmentation algorithms fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region merging framework. 2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used for improving object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts. 3) Image object annotation. In this phase, each object is labelled with a mid-level concept from the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. After that, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts. 4) Scene semantic annotation. The scene semantic extraction phase obtains the scene type by using both mid-level concepts and domain contextual knowledge in the ontologies. Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist with which scene type more frequently. The scene configuration is represented in a probabilistic graph model, and probabilistic inference is employed to calculate the scene type given an annotated image.
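The contextual disambiguation step of phase 3 can be sketched as follows. The labels, scores and co-occurrence values are invented for illustration; the thesis itself uses ontology-encoded domain knowledge together with SVM outputs:

```python
# Resolve an ambiguous candidate label using contextual co-occurrence
# knowledge, in the spirit of phase 3 above. All values are invented.

# Classifier scores for one ambiguous salient object (a blue region).
candidates = {"sky": 0.48, "sea": 0.46}

# Labels already assigned confidently to other objects in the image.
context_labels = ["sand", "palm_tree"]

# Toy contextual knowledge: how strongly a label co-occurs with context.
co_occurrence = {
    ("sky", "sand"): 0.3, ("sky", "palm_tree"): 0.4,
    ("sea", "sand"): 0.8, ("sea", "palm_tree"): 0.7,
}

def disambiguate(candidates, context_labels, co_occurrence):
    """Rescore candidates by visual score times mean contextual support."""
    best_label, best_score = None, -1.0
    for label, visual_score in candidates.items():
        support = sum(co_occurrence.get((label, c), 0.0)
                      for c in context_labels) / len(context_labels)
        score = visual_score * support
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(disambiguate(candidates, context_labels, co_occurrence))  # -> sea
```

Here the near-tied visual scores are broken by context: "sea" co-occurs far more often with "sand" and "palm_tree", so it wins despite the slightly lower classifier score.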
To evaluate the proposed methods, a series of experiments was conducted on a large set of fully annotated outdoor scene images. These include a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.

Relevância:

10.00%

Publicador:

Resumo:

A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of inevitable imperfections due to human error, a lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults are found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying the software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with machine learning algorithms, which use examples of fault-prone and not fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between module and classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources: the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to determine which perform best. Two machine learning algorithms are applied to the data - Naive Bayes and the Support Vector Machine - and the predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features.
A novel extension of this method is also described, based on an observed polarising of points by class when Rank Sum is applied to training data to convert it into a 2D rank sum space. An SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
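One plausible reading of the Rank Sum scheme described above can be sketched as follows: for each feature, class-conditional bin densities are estimated from training data; at prediction time, the classes are ranked by their density in the test point's bin, and the class with the lowest total rank across features wins. This is an interpretation of the abstract, not the thesis implementation:

```python
# A minimal rank-sum style classifier: per-feature class-conditional
# bin densities, classes ranked by density in the test point's bin,
# ranks summed across features. Interpretation only, not the thesis code.
import bisect

def make_bins(values, n_bins):
    """Equal-width bin edges over the observed range of one feature."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # guard against a constant column
    return [lo + i * width for i in range(1, n_bins)]

def bin_index(edges, x):
    return bisect.bisect_right(edges, x)

def fit(X, y, n_bins=4):
    """Per feature: bin edges plus per-class bin density tables."""
    classes = sorted(set(y))
    model = []
    for f in range(len(X[0])):
        col = [row[f] for row in X]
        edges = make_bins(col, n_bins)
        densities = {}
        for c in classes:
            vals = [row[f] for row, label in zip(X, y) if label == c]
            counts = [0] * n_bins
            for v in vals:
                counts[bin_index(edges, v)] += 1
            densities[c] = [cnt / len(vals) for cnt in counts]
        model.append((edges, densities))
    return classes, model

def predict(classes, model, x):
    """Rank 0 = densest bin for that class; lowest total rank wins."""
    totals = {c: 0 for c in classes}
    for (edges, densities), v in zip(model, x):
        b = bin_index(edges, v)
        order = sorted(classes, key=lambda c: -densities[c][b])
        for rank, c in enumerate(order):
            totals[c] += rank
    return min(classes, key=lambda c: totals[c])

# Tiny illustration with two software-metric features (values invented):
X = [[1.0, 10.0], [1.2, 11.0], [5.0, 50.0], [5.2, 52.0]]
y = [0, 0, 1, 1]  # 0 = not fault prone, 1 = fault prone
classes, model = fit(X, y, n_bins=2)
print(predict(classes, model, [5.1, 51.0]))  # -> 1
```

Summing ranks rather than densities is what makes this an abstraction: it discards how much denser one class is than another in a given bin, keeping only the ordering, which makes the decision robust to poorly calibrated density estimates.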

Relevância:

10.00%

Publicador:

Resumo:

This research underlines the extensive application of nanostructured metal oxides in environmental systems such as hazardous waste remediation and water purification. The study aims to forge a new understanding of the complexity of adsorption and photocatalysis in the process of water treatment. Sodium niobate doped with different amounts of tantalum was prepared via a hydrothermal reaction and was observed to adsorb highly hazardous bivalent radioactive isotopes such as Sr2+ and Ra2+ ions. This facilitates the preparation of Nb-based adsorbents for efficiently removing toxic radioactive ions from contaminated water and also identifies the importance of understanding the influence of heterovalent substitution in microporous frameworks. Clay adsorbents were prepared via a two-step method to remove anionic and non-ionic herbicides from water. Firstly, layered beidellite clay was treated with acid in a hydrothermal process; secondly, common silane coupling agents, 3-chloropropyl trimethoxysilane or triethoxysilane, were grafted onto the acid-treated samples to prepare the adsorption materials. In order to isolate the effect of the clay surface, we compared the adsorption properties of the clay adsorbents with γ-Al2O3 nanofibres grafted with the same functional groups. Thin alumina (γ-Al2O3) nanofibres were modified by grafting two organosilane agents, 3-chloropropyltriethoxysilane and octyltriethoxysilane, onto the surface, for the adsorptive removal of alachlor and imazaquin herbicides from water. The formation of organic groups during the functionalisation process established super-hydrophobic sites along the surfaces, and those non-polar regions of the surfaces were able to make close contact with the organic pollutants. A new structure of anatase crystals linked to clay fragments was synthesised by the reaction of TiOSO4 with laponite clay for the degradation of pesticides.
Based on the Ti/clay ratio, these new catalysts showed a high degradation rate when compared with P25. Moreover, TiO2 immobilised on laponite clay fragments could be readily separated from a slurry system after the photocatalytic reaction. Using a series of partial phase transition methods, an effective catalyst with fibril morphology was prepared for the degradation of different types of phenols and trace amounts of herbicides in water. Both H-titanate and TiO2-(B) fibres coated with anatase nanocrystals were studied. When compared with the laponite clay photocatalyst, anatase-dotted TiO2-(B) fibres prepared by a 45 h hydrothermal treatment followed by calcination were not only superior in photocatalytic performance but could also be readily separated from a slurry system after photocatalytic reactions. This study has laid the foundation for fabricating highly efficient nanostructured solids for the removal of radioactive ions and organic pollutants from contaminated water, and the results seem set to contribute to the development of advanced water purification devices. These modified nanostructured materials with unusual properties have broadened their application range beyond their traditional use as adsorbents to encompass the storage of nuclear waste after it is concentrated from contaminated water.

Relevância:

10.00%

Publicador:

Resumo:

This paper examines some of the implications for China of the creative industries agenda as drawn by recent commentators. The creative industries have been seen by many commentators as essential if China is to move from an imitative, low-value economy to an innovative, high-value one. Some suggest that this trajectory is impossible without a full transition to liberal capitalism and democracy - not just removing censorship but instituting 'enlightenment values'. Others suggest that the development of the creative industries themselves will promote social and political change. The paper suggests that the creative industries agenda takes certain elements of a prior cultural industries concept and links them to a new kind of economic development agenda. Though this agenda presents problems for the Chinese government, it does not in itself imply the kind of radical democratic political change with which these commentators associate it. In the form in which the creative industries are presented - as part of an informational economy rather than as a cultural politics - they can be accommodated by a Chinese regime doing 'business as usual'.

Relevância:

10.00%

Publicador:

Resumo:

In recent years, the effects of ions and ultrafine particles on ambient air quality and human health have been well documented; however, knowledge about their sources, concentrations and interactions within different types of urban environments remains limited. This thesis presents the results of numerous field studies aimed at quantifying variations in ion concentration with distance from the source, as well as identifying the dynamics of the particle ionisation processes that lead to the formation of charged particles in the air. In order to select the most appropriate measurement instruments and locations for the studies, a literature review was conducted on studies reporting ion and ultrafine particle emissions from different sources in a typical urban environment. The initial study involved laboratory experiments on the attachment of ions to aerosols, to gain a better understanding of the interaction between ions and particles. This study determined the efficiency of corona ions at charging and removing particles from the air, as a function of different particle number and ion concentrations. The results showed that particle number loss was directly proportional to particle charge concentration, and that higher small-ion concentrations led to higher particle deposition rates in all size ranges investigated. Nanoparticle numbers were also observed to decrease with increasing particle charge concentration, due to their higher Brownian mobility and subsequent attachment to charged particles. Given that corona discharge from high-voltage powerlines is considered one of the major ion sources in urban areas, a detailed study was then conducted under three parallel overhead powerlines, with a steady wind blowing perpendicular to the lines.
The results showed that large sections of the lines did not produce any corona at all, while strong positive emissions were observed from discrete components such as a particular set of spacers on one of the lines. Measurements were also conducted at eight upwind and downwind points perpendicular to the powerlines, spanning a total distance of about 160 m. The maximum positive small and large ion concentrations, and the maximum DC electric field, were observed at a point 20 m downwind from the lines, with median values of 4.4×10³ cm⁻³, 1.3×10³ cm⁻³ and 530 V m⁻¹, respectively. It was estimated that, at this point, less than 7% of the total number of particles was charged. The electrical parameters decreased steadily with increasing downwind distance from the lines but remained significantly higher than background levels at the limit of the measurements. Vehicles are also among the most prevalent ion- and particle-emitting sources in urban environments, and therefore experiments were conducted behind a motor vehicle exhaust pipe and near busy motorways, with the aim of quantifying small-ion and particle charge concentrations, as well as their distribution as a function of distance from the source. The study found approximately equal numbers of positive and negative ions in the vehicle exhaust plume, as well as near motorways, to which heavy-duty vehicles were believed to be the main contributor. In addition, cluster ion concentration was observed to decrease rapidly within the first 10-15 m from the road, and ion-ion recombination and ion-aerosol attachment, rather than dilution and turbulence-related processes, were the most likely causes of ion depletion. Beyond these dominant ion sources, other sources exist within urban environments where intensive human activities take place.
In this part of the study, airborne concentrations of small ions, particles and net particle charge were measured at 32 different outdoor sites in and around Brisbane, Australia, classified into seven groups: park, woodland, city centre, residential, freeway, powerlines and power substation. While the study confirmed that powerlines, power substations and freeways were the main ion sources in an urban environment, it also suggested that not all powerlines emitted ions - only those with discrete corona discharge points. In addition to the main ion sources, higher ion concentrations were also observed in environments affected by vehicle traffic and human activities, such as the city centre and residential areas. A considerable number of ions was also observed in a woodland area, and it remains unclear whether they were emitted directly from the trees or originated from some other local source. Overall, it was found that different types of environments had different types of ion sources, which could be classified as unipolar or bipolar particle sources, as well as ion sources that co-exist with particle sources. In general, fewer small ions were observed at sites with co-existing sources; however, particle charge was often higher due to the effect of ion-particle attachment. In summary, this study quantified ion concentrations in typical urban environments, identified major charge sources in urban areas, and determined the spatial dispersion of ions as a function of distance from the source, as well as their controlling factors. The study also presented ion-aerosol attachment efficiencies under high ion concentration conditions, both in the laboratory and in real outdoor environments. The outcomes of these studies addressed the aims of this work and advanced understanding of the charge status of aerosols in the urban environment.
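The two ion-depletion mechanisms identified above (ion-ion recombination and ion-aerosol attachment, rather than dilution) can be sketched with a simple balance equation, dn/dt = -αn² - βNn, assuming equal positive and negative small-ion concentrations n and aerosol number concentration N. The coefficient values below are order-of-magnitude literature figures, not measurements from this study:

```python
# Euler integration of small-ion depletion by ion-ion recombination and
# ion-aerosol attachment: dn/dt = -ALPHA*n**2 - BETA*N*n, with equal
# positive and negative ion concentrations assumed. Coefficients are
# order-of-magnitude assumptions, not results from this study.
ALPHA = 1.6e-6  # ion-ion recombination coefficient (cm^3 s^-1)
BETA = 1.0e-6   # effective ion-aerosol attachment coefficient (cm^3 s^-1)

def ion_decay(n0, aerosol_n, t_end, dt=0.01):
    """Small-ion concentration after t_end seconds of plume travel."""
    n = n0
    for _ in range(int(t_end / dt)):
        n += dt * (-ALPHA * n * n - BETA * aerosol_n * n)
    return n

# E.g. ions emitted into a traffic plume with 1e5 particles cm^-3,
# travelling roughly 15 m at 3 m s^-1 (about 5 s):
n_final = ion_decay(n0=1.0e4, aerosol_n=1.0e5, t_end=5.0)
print(f"{n_final:.0f} ions cm^-3 remaining")
```

Under these assumed values the attachment term dominates at high particle loads, which is consistent with the observation above that fewer small ions survive at sites with co-existing particle sources.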

Relevância:

10.00%

Publicador:

Resumo:

The World Health Organisation has highlighted the urgent need to address the escalating global public health crisis associated with road trauma. Low-income and middle-income countries bear the brunt of this crisis, and rapid increases in private vehicle ownership in these nations present new challenges to authorities, citizens and researchers alike. Human factors play a major role in the road safety equation. In China, human factors have been implicated in more than 90% of road crashes, with speeding identified as the primary cause (Wang, 2003). However, research investigating the factors that influence driving speeds in China is lacking (WHO, 2004). To help address this gap, we present qualitative findings from group interviews conducted with 35 Beijing car drivers in 2008. Some themes arising from the data analysis showed strong similarities with findings from highly motorised nations (e.g., the UK, USA and Australia), including driver definitions of 'speeding' that appear to be aligned with legislative enforcement tolerances, factors relating to the ease or difficulty of speed limit compliance, and the modifying influence of speed cameras. However, unique differences were evident, some of which, to our knowledge, are previously unreported in the research literature. These themes included an expressed lack of understanding about why speed limits are necessary, and a perceived lack of transparency in traffic law enforcement and the use of associated revenue. The perception of an unfair system seemed related to issues such as the differential treatment of certain drivers and the large amount of individual discretion available to traffic police when administering sanctions. Additionally, a wide range of strategies to overtly avoid detection for speeding and/or the associated sanctions was reported.
These strategies included the use of in-vehicle speed camera detectors, covering or removing vehicle licence number plates, and using personal networks of influential people to reduce or cancel a sanction. These findings have implications for traffic law, law enforcement, driver training and public education in China. While the sample is not representative of all Beijing drivers, we believe these findings offer unique insights into driver behaviour in China.

Relevância:

10.00%

Publicador:

Resumo:

Anybody who has attempted to publish some aspect of their work in an academic journal will know that it is not as easy as it may seem. The amount of preparation required of a manuscript can be quite daunting. Besides actually writing the manuscript, authors face a number of technical requirements. Each journal has its own formatting requirements, relating not only to section headings and text layout but also to very small details such as the placement of commas in reference lists. When presenting data in the form of figures, these must be formatted so that they can be understood by the readership, and most journals still require that figures be legible when printed in black and white. Most daunting (and important) of all, for an article to be scientifically valid it must be absolutely true to the work reported (i.e. all data must be shown unless a strong justification exists for removing data points), and this might cause angst in the minds of authors when the results are unclear or contradict the expected or desired result.