993 results for removing


Relevance:

10.00%

Publisher:

Abstract:

One of the main causes of above-knee or transfemoral amputation (TFA) in the developed world is trauma to the limb. The number of people undergoing TFA due to limb trauma, particularly war injuries, has been increasing. Typically, the trauma amputee population, including war-related amputees, is otherwise healthy and active and desires to return to employment and a usual lifestyle. Consequently, there is a growing need to restore long-term mobility and limb function to this population. Traditionally, transfemoral amputees are provided with an artificial or prosthetic leg consisting of a fabricated socket, a knee joint mechanism and a prosthetic foot. Amputees have reported several problems related to the socket of their prosthetic limb, including pain in the residual limb, poor socket fit, discomfort and poor mobility. Removing the socket from the prosthetic limb could eliminate or reduce these problems. One solution is the direct attachment of the prosthesis to the residual bone (femur) inside the residual limb. This technique has been used on a small population of transfemoral amputees since 1990. A threaded titanium implant is screwed into the shaft of the femur, and a second component connects the implant to the prosthesis. A period of time is required for the implant to become fully attached to the bone, a process called osseointegration (OI), and able to withstand applied load; only then can the prosthesis be attached. The advantages of transfemoral osseointegration (TFOI) over conventional prosthetic sockets include better hip mobility, sitting comfort and prosthetic retention, and fewer skin problems on the residual limb. However, because of the time required for OI to progress and for the rehabilitation exercises to be completed, it can take up to twelve months after implant insertion for an amputee to be able to bear load and walk unaided. The long rehabilitation time is a significant disadvantage of TFOI and may be impeding wider adoption of the technique. There is a need for a non-invasive method of assessing the degree of osseointegration between the bone and the implant. If such a method were capable of determining the progression of TFOI and assessing when the implant could withstand physiological load, it could reduce the overall rehabilitation time. Vibration analysis has been suggested as a potential technique: it is a non-destructive method of assessing the dynamic properties of a structure, and changes in the physical properties of a structure can be identified from changes in its dynamic properties. Consequently, vibration analysis, both experimental and computational, has been used to assess bone fracture healing, prosthetic hip loosening and dental implant OI with varying degrees of success. More recently, experimental vibration analysis has been used in TFOI; however, further work is needed to assess the potential of the technique and to fully characterise the femur-implant system. The overall aim of this study was to develop physical and computational models of the TFOI femur-implant system and to use these models to investigate the feasibility of vibration analysis for detecting the process of OI. Femur-implant physical models were developed and manufactured using synthetic materials to represent four key stages of OI development (identified from a physiological model), simulated using different interface conditions between the implant and the femur. Experimental vibration analysis (modal analysis) was then conducted using the physical models.
The femur-implant models, representing stages one to four of OI development, were excited and the modal parameters obtained over the range 0–5 kHz. The results indicated that the technique had limited capability in distinguishing between different interface conditions. The fundamental bending mode did not alter with interfacial changes. However, higher modes were able to track chronological changes in interface condition through changes in natural frequency, although no single modal parameter could uniquely distinguish between the interface conditions. The key finding was the importance of the model boundary condition (how the model is constrained): variations in the boundary condition altered the modal parameters obtained. The boundary conditions therefore need to be held constant between tests for detected modal parameter changes to be attributable to interface condition changes. A three-dimensional finite element (FE) model of the femur-implant model was then developed and used to explore the sensitivity of the modal parameters to more subtle interfacial and boundary condition changes. The FE model was created using the synthetic femur geometry and an approximation of the implant geometry. The natural frequencies of the FE model matched the experimental frequencies within 20%, and the FE and experimental mode shapes were similar; the FE model was therefore shown to successfully capture the dynamic response of the physical system. As was found with the experimental modal analysis, the fundamental bending mode of the FE model did not alter with changes in interface elastic modulus. Axial and torsional modes were identified by the FE model that were not detected experimentally; the torsional mode exhibited the largest frequency change due to interfacial changes (103% between the lower and upper limits of the interface modulus range). The FE model thus provided additional information on the dynamic response of the system and was complementary to the experimental model. The small changes in natural frequency over a large range of interface region elastic moduli indicated that the method may only be able to distinguish between early and late OI progression. The boundary conditions applied to the FE model influenced the modal parameters to a far greater extent than the interface condition variations. Therefore the FE model, like the experimental modal analysis, indicated that the boundary conditions need to be held constant between tests in order for detected changes in modal parameters to be attributed to interface condition changes alone. The results of this study suggest that in a clinical setting it is unlikely that the in vivo boundary conditions of the amputated femur could be adequately controlled or replicated over time; consequently, it is unlikely that any longitudinal change in frequency detected by the modal analysis technique could be attributed exclusively to changes at the femur-implant interface. Further development of the modal analysis technique would therefore require significant consideration of the clinical boundary conditions and investigation of modes other than the bending modes.
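The sensitivity being probed here can be illustrated with a toy lumped-parameter calculation. The following is a minimal sketch, not the thesis model: an arbitrary grounded three-mass chain in which k_int stands in for the femur-implant interface stiffness, with the natural frequencies tracked as the interface stiffens during OI. All masses and stiffness values are assumptions.

```python
# Minimal sketch (assumed values, not the thesis model): natural frequencies
# of a grounded three-mass chain as the "interface" spring k_int stiffens.
import numpy as np

def natural_frequencies(k_int):
    m = np.diag([0.5, 0.3, 0.1])                       # masses in kg (assumed)
    k = np.array([[2e7 + k_int, -k_int,        0.0],
                  [-k_int,       k_int + 1e7, -1e7],
                  [0.0,         -1e7,          1e7]])  # stiffnesses in N/m (assumed)
    # Undamped modes: eigenvalues of M^-1 K are the squared angular frequencies.
    w2 = np.sort(np.real(np.linalg.eigvals(np.linalg.solve(m, k))))
    return np.sqrt(w2) / (2 * np.pi)                   # convert rad/s to Hz

# Early, intermediate and late OI represented by increasing interface stiffness.
for k_int in (1e6, 1e7, 1e8):
    print(f"k_int = {k_int:.0e} N/m -> modes (Hz):",
          np.round(natural_frequencies(k_int)))
```

In a simple chain like this, every mode shifts with k_int; the study's finding that the fundamental bending mode of the real geometry was insensitive to interfacial change is exactly the kind of behaviour such models are built to expose.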

Relevance:

10.00%

Publisher:

Abstract:

The artwork was created in response to the exhibition theme, "DIGILOG+IN". It aimed to express the beauty that emerges when digital and analogue materials are combined. It visualised an organic harmony between digital and natural objects through digitalisation and built a fantasy of a digital world. However, a conceptual dilemma arose: the "digitalisation" of natural objects into a digital format merely produces a digital work. In other words, a harmony between digital and analogue (natural) can only be achieved through a digitalising process that removes the intrinsic nature of the analogue. The substance of the analogue therefore no longer exists in the digitally visualised form, but is virtually represented. The title of the artwork, "digitualisation", is a word combining "digital" and "virtualisation"; it refers to digitally virtualising the substance of natural objects. The artwork visualised the concept of digitualisation using natural objects (flowers) merged within a virtual space (a building entrance foyer).

Relevance:

10.00%

Publisher:

Abstract:

With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Therefore, image semantics extraction is of great importance to content-based image retrieval because it allows users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: What is the basic unit of semantic representation? How can the semantic unit be linked to high-level image knowledge? How can contextual information be stored and utilised for image annotation? In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next-generation web, aims at making the content of any type of media understandable not only to humans but also to machines. Due to the large amounts of multimedia data prevalent on the Web, researchers and industry are beginning to pay more attention to the Multimedia Semantic Web. Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, remain worthwhile investigations. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in four phases, as follows. 1) Salient object extraction. A salient object serves as the basic unit in image semantic extraction, as it captures the common visual properties of the objects. Image segmentation is often used as the first step for detecting salient objects, but most segmentation algorithms fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region merging framework. 2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used to improve object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts. 3) Image object annotation. In this phase, each object is labelled with a mid-level concept in the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. After that, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts. 4) Scene semantic annotation. The scene semantic extraction phase obtains the scene type by using both the mid-level concepts and the domain contextual knowledge in the ontologies.
Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist with which scene type more frequently. The scene configuration is represented in a probabilistic graph model, and probabilistic inference is employed to calculate the scene type given an annotated image. To evaluate the proposed methods, a series of experiments was conducted on a large set of fully annotated outdoor scene images. These include a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
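Phase 3 above hinges on training Support Vector Machines over features extracted from salient objects. Here is a minimal sketch of that step, assuming generic feature vectors and made-up concept labels; nothing below reproduces the thesis pipeline or its ontologies.

```python
# Minimal sketch: label salient objects with mid-level concepts via an SVM.
# Features and concept labels are synthetic stand-ins, not the thesis data.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))      # one 16-D feature vector per salient object
y = rng.integers(0, 3, size=200)    # 0=sky, 1=tree, 2=water (illustrative concepts)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, y)

# Candidate labels with probabilities; an ontology could then prune
# contextually implausible concepts before the final label is chosen.
print(clf.predict_proba(X[:1]))
```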

Relevance:

10.00%

Publisher:

Abstract:

A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections caused by human error, a lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done using machine learning algorithms, which use examples of fault-prone and non-fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between a module and its classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources: the NASA Metrics Data Program and the open-source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms, Naive Bayes and the Support Vector Machine, are applied to the data, and the predictive results are compared to those of previous efforts; they are found to be superior on selected data sets and comparable on others. In addition, a new classification method, Rank Sum, is proposed, in which a ranking abstraction is laid over bin densities for each class and a classification is determined from the sum of ranks over features. A novel extension of this method is also described, based on an observed polarising of points by class when rank sum is applied to training data to convert it into 2D rank sum space. An SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
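As an illustration of the basic experimental setup, here is a minimal sketch using the two learners named above. The metrics and fault labels are synthetic stand-ins; the NASA MDP and Eclipse data, and the thesis's feature selection steps, are not reproduced.

```python
# Minimal sketch: predicting fault-prone modules from software metrics with
# Naive Bayes and an SVM. Metrics and labels below are synthetic.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))              # e.g. LOC, cyclomatic complexity, ...
y = (X[:, 0] + rng.normal(size=500) > 1).astype(int)   # 1 = fault-prone (toy rule)

for name, model in [("Naive Bayes", GaussianNB()), ("SVM", SVC())]:
    scores = cross_val_score(model, X, y, cv=5)        # 5-fold cross-validation
    print(name, scores.mean().round(3))
```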

Relevance:

10.00%

Publisher:

Abstract:

This research underlines the extensive application of nanostructured metal oxides in environmental systems such as hazardous waste remediation and water purification, and seeks to forge a new understanding of the complexity of adsorption and photocatalysis in water treatment. Sodium niobate doped with different amounts of tantalum was prepared via a hydrothermal reaction and was observed to adsorb highly hazardous bivalent radioactive isotopes such as Sr2+ and Ra2+ ions. This study facilitates the preparation of Nb-based adsorbents for efficiently removing toxic radioactive ions from contaminated water and also identifies the importance of understanding the influence of heterovalent substitution in microporous frameworks. Clay adsorbents were prepared via a two-step method to remove anionic and non-ionic herbicides from water: firstly, layered beidellite clay was treated with acid in a hydrothermal process; secondly, common silane coupling agents, 3-chloropropyl trimethoxysilane or triethoxysilane, were grafted onto the acid-treated samples to prepare the adsorption materials. In order to isolate the effect of the clay surface, we compared the adsorption properties of the clay adsorbents with γ-Al2O3 nanofibres grafted with the same functional groups. Thin alumina (γ-Al2O3) nanofibres were modified by grafting two organosilane agents, 3-chloropropyltriethoxysilane and octyl triethoxysilane, onto the surface, for the adsorptive removal of alachlor and imazaquin herbicides from water. The formation of organic groups during the functionalisation process established super-hydrophobic sites along the surfaces, and those non-polar regions of the surfaces were able to make close contact with the organic pollutants. A new structure of anatase crystals linked to clay fragments was synthesised by the reaction of TiOSO4 with laponite clay for the degradation of pesticides. Depending on the Ti/clay ratio, these new catalysts showed a high degradation rate when compared with P25. Moreover, the TiO2 immobilised on laponite clay fragments could be readily separated from a slurry system after the photocatalytic reaction. Using a series of partial phase transition methods, an effective catalyst with fibril morphology was prepared for the degradation of different types of phenols and trace amounts of herbicides in water. Both H-titanate and TiO2(B) fibres coated with anatase nanocrystals were studied. When compared with a laponite clay photocatalyst, anatase-dotted TiO2(B) fibres prepared by a 45 h hydrothermal treatment followed by calcination were not only superior in photocatalytic performance but could also be readily separated from a slurry system after the photocatalytic reactions. This study has laid the foundation for fabricating highly efficient nanostructured solids for the removal of radioactive ions and organic pollutants from contaminated water. These results now seem set to contribute to the development of advanced water purification devices. These modified nanostructured materials with unusual properties have broadened their application range beyond their traditional use as adsorbents, to encompass the storage of nuclear waste after it has been concentrated from contaminated water.

Relevance:

10.00%

Publisher:

Abstract:

This paper examines some of the implications for China of the creative industries agenda as drawn by some recent commentators. The creative industries have been seen by many as essential if China is to move from an imitative low-value economy to an innovative high-value one. Some suggest that this trajectory is impossible without a full transition to liberal capitalism and democracy: not just removing censorship but instituting 'enlightenment values'. Others suggest that the development of the creative industries themselves will promote social and political change. The paper suggests that the creative industries agenda takes certain elements of a prior cultural industries concept and links them to a new kind of economic development agenda. Though this agenda presents problems for the Chinese government, it does not in itself imply the kind of radical democratic political change with which these commentators associate it. In the form in which the creative industries are presented, as part of an informational economy rather than as a cultural politics, they can be accommodated by a Chinese regime doing 'business as usual'.

Relevance:

10.00%

Publisher:

Abstract:

In recent years, the effects of ions and ultrafine particles on ambient air quality and human health have been well documented; however, knowledge about their sources, concentrations and interactions within different types of urban environments remains limited. This thesis presents the results of numerous field studies aimed at quantifying variations in ion concentration with distance from the source, as well as identifying the dynamics of the particle ionisation processes that lead to the formation of charged particles in the air. In order to select the most appropriate measurement instruments and locations for the studies, a literature review was conducted of studies reporting ion and ultrafine particle emissions from different sources in a typical urban environment. The initial study involved laboratory experiments on the attachment of ions to aerosols, so as to gain a better understanding of the interaction between ions and particles. This study determined the efficiency of corona ions at charging and removing particles from the air, as a function of different particle number and ion concentrations. The results showed that particle number loss was directly proportional to particle charge concentration, and that higher small-ion concentrations led to higher particle deposition rates in all size ranges investigated. Nanoparticle concentrations were also observed to decrease with increasing particle charge concentration, due to their higher Brownian mobility and subsequent attachment to charged particles. Given that corona discharge from high-voltage powerlines is considered one of the major ion sources in urban areas, a detailed study was then conducted under three parallel overhead powerlines, with a steady wind blowing perpendicular to the lines. The results showed that large sections of the lines did not produce any corona at all, while strong positive emissions were observed from discrete components such as a particular set of spacers on one of the lines. Measurements were also conducted at eight upwind and downwind points perpendicular to the powerlines, spanning a total distance of about 160 m. The maximum positive small and large ion concentrations, and the maximum DC electric field, were observed at a point 20 m downwind from the lines, with median values of 4.4×10³ cm⁻³, 1.3×10³ cm⁻³ and 530 V m⁻¹, respectively. It was estimated that, at this point, less than 7% of the total number of particles was charged. The electrical parameters decreased steadily with increasing downwind distance from the lines but remained significantly higher than background levels at the limit of the measurements. Moreover, vehicles are one of the most prevalent ion- and particle-emitting sources in urban environments, and therefore experiments were also conducted behind a motor vehicle exhaust pipe and near busy motorways, with the aim of quantifying small-ion and particle charge concentrations, as well as their distribution as a function of distance from the source. The study found approximately equal numbers of positive and negative ions in the vehicle exhaust plume, as well as near motorways, to which heavy-duty vehicles were believed to be the main contributor. In addition, cluster ion concentration was observed to decrease rapidly within the first 10–15 m from the road, and ion-ion recombination and ion-aerosol attachment were the most likely causes of ion depletion, rather than dilution- and turbulence-related processes.
In addition to the above-mentioned dominant ion sources, other sources exist within urban environments where intensive human activities take place. In this part of the study, airborne concentrations of small ions, particles and net particle charge were measured at 32 different outdoor sites in and around Brisbane, Australia, classified into seven groups: park, woodland, city centre, residential, freeway, powerlines and power substation. Whilst the study confirmed that powerlines, power substations and freeways were the main ion sources in an urban environment, it also suggested that not all powerlines emitted ions, only those with discrete corona discharge points. In addition to the main ion sources, higher ion concentrations were also observed in environments affected by vehicle traffic and human activities, such as the city centre and residential areas. A considerable number of ions was also observed in a woodland area, and it is still unclear whether they were emitted directly from the trees or originated from some other local source. Overall, it was found that different types of environments had different types of ion sources, which could be classified as unipolar or bipolar particle sources, as well as ion sources that co-exist with particle sources. In general, fewer small ions were observed at sites with co-existing sources; however, particle charge was often higher due to the effect of ion-particle attachment. In summary, this study quantified ion concentrations in typical urban environments, identified major charge sources in urban areas, and determined the spatial dispersion of ions as a function of distance from the source, as well as their controlling factors. The study also presented ion-aerosol attachment efficiencies under high ion concentration conditions, both in the laboratory and in real outdoor environments. The outcomes of these studies addressed the aims of this work and advanced understanding of the charge status of aerosols in the urban environment.
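The competing loss processes named above, ion-ion recombination versus ion-aerosol attachment, are conventionally captured by the small-ion balance equation dn/dt = q − αn² − βnN. The following is a minimal sketch with textbook order-of-magnitude coefficients; apart from the 4.4×10³ cm⁻³ starting concentration quoted above, none of the values come from this study.

```python
# Minimal sketch: small-ion balance with recombination and ion-aerosol
# attachment. Coefficients are order-of-magnitude textbook values, not
# measurements from this study.
import numpy as np
from scipy.integrate import solve_ivp

q     = 10.0    # ion pair production rate, cm^-3 s^-1 (assumed)
alpha = 1.6e-6  # ion-ion recombination coefficient, cm^3 s^-1
beta  = 1.0e-6  # effective ion-aerosol attachment coefficient, cm^3 s^-1 (assumed)
N     = 1.0e4   # aerosol number concentration, cm^-3 (assumed)

def balance(t, n):
    # dn/dt = production - recombination - attachment to aerosol particles
    return q - alpha * n**2 - beta * n * N

sol = solve_ivp(balance, (0.0, 600.0), [4.4e3])
print(sol.y[0][-1])   # small-ion concentration relaxing towards steady state
```

With an elevated aerosol load N, the βnN attachment term dominates and the small-ion concentration collapses quickly, which is consistent with the rapid depletion observed near roads.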

Relevance:

10.00%

Publisher:

Abstract:

The World Health Organisation has highlighted the urgent need to address the escalating global public health crisis associated with road trauma. Low-income and middle-income countries bear the brunt of this, and rapid increases in private vehicle ownership in these nations present new challenges to authorities, citizens, and researchers alike. The contribution of human factors to the road safety equation is high. In China, human factors have been implicated in more than 90% of road crashes, with speeding identified as the primary cause (Wang, 2003). However, research investigating the factors that influence driving speeds in China is lacking (WHO, 2004). To help address this gap, we present qualitative findings from group interviews conducted with 35 Beijing car drivers in 2008. Some themes arising from the data analysis showed strong similarities with findings from highly motorised nations (e.g., UK, USA, and Australia), including driver definitions of 'speeding' that appear to be aligned with legislative enforcement tolerances, factors relating to the ease or difficulty of speed limit compliance, and the modifying influence of speed cameras. However, unique differences were evident, some of which, to our knowledge, are previously unreported in the research literature. These included an expressed lack of understanding about why speed limits are necessary and a perceived lack of transparency in traffic law enforcement and the use of associated revenue. The perception of an unfair system seemed related to issues such as the differential treatment of certain drivers and the large amount of individual discretion available to traffic police when administering sanctions. Additionally, a wide range of strategies to overtly avoid detection for speeding and/or the associated sanctions were reported, including the use of in-vehicle speed camera detectors, covering or removing vehicle licence number plates, and using personal networks of influential people to reduce or cancel a sanction. These findings have implications for traffic law, law enforcement, driver training, and public education in China. While not representative of all Beijing drivers, we believe that these research findings offer unique insights into driver behaviour in China.

Relevance:

10.00%

Publisher:

Abstract:

Anybody who has attempted to publish some aspect of their work in an academic journal will know that it isn't as easy as it may seem. The amount of preparation required of a manuscript can be quite daunting. Besides actually writing the manuscript, authors face a number of technical requirements. Each journal has its own formatting requirements, relating not only to section headings and text layout but also to very small details such as the placement of commas in reference lists. If data are presented in the form of figures, they must be formatted so that they can be understood by the readership, and most journals still require that data be in a format which can be read when printed in black and white. Most daunting (and important) of all, for an article to be scientifically valid it must be absolutely true in its representation of the work reported (i.e. all data must be shown unless a strong justification exists for removing data points), and this can cause angst in the minds of authors when the results aren't clear or possibly contradict the expected or desired result.

Relevance:

10.00%

Publisher:

Abstract:

Customer perceived value concerns the experiences of consumers when using a service and is often referred to in the context of service provision or on the basis of service quality (Auh et al., 2007; Chang, 2008; Jackson, 2007; Laukkanen, 2007; Padgett & Mulvey, 2007; Shamdasani, Mukherjee & Malhotra, 2008). Understanding customer perceived value has benefits for social marketing and allows scholars and practitioners alike to identify why consumers engage in positive social behaviours through the use of services. Understanding consumers' use of wellness services in particular is important, because the use of wellness services demonstrates the fulfilment of social marketing aims: performing proactive, positive social behaviours that benefit the individual and society (Andreasen, 1994). As consumers typically act out of self-interest (Rothschild, 1999), this research posits that a value proposition must be made to consumers in order to encourage behavioural change. This leads to the overall research question: How is value created in social marketing wellness services? A traditional method of understanding value has been the adoption of an economic approach, in which value is the direct outcome of a cost-benefit analysis of the utility gained (Payne & Holt, 1999). However, there has since been a shift towards an experiential approach to understanding value, one that considers the consumption experience of the consumer, which extends beyond the service exchange to include pre- and post-consumption stages (Russell-Bennett, Previte & Zainuddin, 2009). As such, this research uses an experiential approach to identify the value that exists in social marketing wellness services. Four dimensions of value have been commonly conceptualised and identified in the commercial marketing literature: functional, emotional, social, and altruistic value (Holbrook, 1994; Sheth, Newman & Gross, 1991; Sweeney & Soutar, 2001). It is not known whether these value dimensions also exist in social marketing. In addition, sources of value said to influence the value dimensions have been conceptualised in the literature: sources such as information, interaction, environment, service, customer co-creation, and social mandate have been identified conceptually in both the commercial and the social marketing literature (Russell-Bennett, Previte & Zainuddin, 2009; Smith & Colgate, 2007). However, it is not clear which sources of value contribute to the creation of value for users of wellness services. This research therefore seeks to explore these relationships. The research was conducted in a wellness service context, specifically breast cancer screening services. The primary target consumers of these services are women aged 50 to 69 years (inclusive) who have never been diagnosed with breast cancer. It is recommended that women in this target group have a breast screen every two years in order to achieve the most effective medical outcomes from screening. A two-study mixed-method approach was utilised. Study 1 was a qualitative exploratory study that analysed individual depth interviews with 25 information-rich respondents. The interviews were transcribed verbatim and analysed using NVivo 8 software. The qualitative results provided evidence of the existence of the four value dimensions in social marketing.
The results also allowed for the development of a typology of experiential value by synthesising current understanding of the value dimensions with the activity aspects of experiential value identified by Holbrook (1994) and Mathwick, Malhotra and Rigdon (2001). The qualitative results also provided evidence for the existence of sources of value in social marketing, namely information, interaction, environment and consumer participation. In particular, a categorisation of sources of value was developed from the findings of Study 1, identifying organisational, consumer, and third-party sources of value. A proposed model of value co-creation and a set of hypotheses were developed on the basis of Study 1 for further testing in Study 2. Study 2 was a large-scale quantitative confirmatory study that sought to test the proposed model of value co-creation and the hypotheses developed. An online survey was administered Australia-wide to women in the target audience. A response rate of 20.1% was achieved, resulting in a final sample of 797 usable responses after removing ineligible respondents. Reliability and validity analyses were conducted on the data, followed by Exploratory Factor Analysis (EFA) in PASW18 and Confirmatory Factor Analysis (CFA) in AMOS18. Following these preliminary analyses, the data were subjected to Structural Equation Modelling (SEM) in AMOS18 to test the path relationships hypothesised in the proposed model of value creation. The SEM output revealed that all hypotheses were supported, with the exception of one relationship, which was non-significant. In addition, post hoc tests revealed seven further significant non-hypothesised relationships in the model. The quantitative results show that organisational sources of value, as well as consumer participation sources of value, influence both the functional and the emotional dimensions of value. The experience of both functional and emotional value in wellness services leads to satisfaction with the experience, followed by behavioural intentions to perform the behaviour and use the service again. One of the significant non-hypothesised relationships revealed that emotional value leads to functional value in wellness services, providing further empirical evidence that emotional value features more prominently than functional value for users of wellness services. This research offers several contributions to theory and practice. Theoretically, it addresses a gap in the literature by using social marketing theory to provide an alternative means of understanding individual behaviour in a domain that has predominantly been investigated in public health. It also clarifies the concept of value and offers empirical evidence that value is a multi-dimensional construct with separate and distinct dimensions. Empirical evidence for a typology of experiential value, as well as a categorisation of sources of value, is also provided. In its practical contributions, this research identifies the value creation process as a framework and offers health service organisations a diagnostic tool for identifying aspects of the service process that facilitate value creation.
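For readers unfamiliar with the path-testing step, the following is a deliberately simplified sketch: ordinary least squares path analysis on synthetic data, a stand-in for the full SEM run in AMOS18. Variable names and effect sizes are illustrative only; the sample size of 797 is the one value taken from the study.

```python
# Simplified sketch: testing hypothesised paths (value -> satisfaction ->
# behavioural intentions) with simple regressions, standing in for the full
# SEM conducted in AMOS18. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n = 797                                   # the study's final sample size
value = rng.normal(size=n)                # stand-in for an emotional value score
satisfaction = 0.7 * value + rng.normal(scale=0.5, size=n)
intentions = 0.6 * satisfaction + rng.normal(scale=0.5, size=n)

def slope(x, y):
    """Simple-regression slope of y on x."""
    return np.corrcoef(x, y)[0, 1] * y.std() / x.std()

print("value -> satisfaction:", round(slope(value, satisfaction), 2))
print("satisfaction -> intentions:", round(slope(satisfaction, intentions), 2))
```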

Relevance:

10.00%

Publisher:

Abstract:

Local governments struggle to engage time-poor and seemingly apathetic citizens, as well as the city's young digital natives, the digital locals. This project aims to provide a lightweight, technological contribution towards removing the hierarchy between those who build the city and those who use it. We aim to narrow this gap by enhancing people's experience of physical spaces with digital, civic technologies that are directly accessible within that space. This paper presents the findings of a design trial allowing users to interact with a public screen via their mobile phones. The screen facilitated a feedback platform about a concrete urban planning project by promoting specific questions and encouraging direct, in-situ, real-time responses via SMS and Twitter. This new mechanism offers additional benefits for civic participation, as it gives voice to residents who otherwise would not be heard. It also promotes a positive attitude towards local governments and gathers information different from that collected by more traditional public engagement tools.
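As a rough illustration of the plumbing such a trial needs, here is a toy sketch, not the system used in the trial: a small web service that accepts messages forwarded by an SMS gateway or a Twitter poller and exposes the latest responses for the screen to poll. The endpoint paths, the "Body" field name, and the moderation rule are all assumptions.

```python
# Toy sketch (not the trial's actual system): ingest SMS/Twitter feedback and
# serve the latest responses to a public screen. Field names are assumptions.
from flask import Flask, request, jsonify

app = Flask(__name__)
feedback = []          # in-memory store; a real deployment would persist this

BANNED = {"spam"}      # placeholder moderation list

@app.route("/incoming", methods=["POST"])
def incoming():
    """Receive a message forwarded by an SMS gateway or a Twitter poller."""
    text = request.form.get("Body", "").strip()
    if text and not (set(text.lower().split()) & BANNED):
        feedback.append(text)
    return ("", 204)

@app.route("/screen")
def screen():
    """The public screen polls this endpoint for the latest responses."""
    return jsonify(feedback[-10:])

if __name__ == "__main__":
    app.run(port=8080)
```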

Relevance:

10.00%

Publisher:

Abstract:

Problems associated with processing the whole sugarcane crop can be minimised by removing impurities during the clarification stage. As a first step, it is important to understand the colloidal chemistry of juice particles at a molecular level, to assist development strategies for effective clarification performance. This paper presents the composition and surface characteristics of colloidal particles originating from various juice types, determined using scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM-EDX), X-ray photoelectron spectroscopy (XPS) and zeta potential measurements. The results indicate that three types of colloidal particles are present, namely an aluminosilicate compound, silica and iron oxide, with the latter two being abundant. Proteins, polysaccharides and organic acids were identified on the surfaces of particles in juice. The overall particle charge, measured as zeta potential, varies from −2 mV to −6 mV. In comparison to juice expressed from burnt cane, the zeta potential values of particles originating from the whole crop were more negative. This in part explains why these juices are difficult to clarify.

Relevance:

10.00%

Publisher:

Abstract:

The opening phrase of the title is from Charles Darwin's notebooks (Schweber 1977). It is a double reminder: firstly, that mainstream evolutionary theory is not just about describing nature but is particularly looking for mechanisms or 'causes'; and secondly, that there will usually be several causes affecting any particular outcome. The second part of the title reflects our concern at the almost universal rejection of the idea that biological mechanisms are sufficient for macroevolutionary changes, a rejection of a cornerstone of Darwinian evolutionary theory. Our primary aim here is to consider ways of making it easier to develop and to test hypotheses about evolution; formalising hypotheses can help generate tests. In an absolute sense, some of the discussion by scientists about evolution is little better than the lack of reasoning used by those advocating intelligent design. Our discussion is set in a Popperian framework, where science is defined as that area of study in which it is possible, in principle, to find evidence against hypotheses; they are in principle falsifiable. However, with time, the boundaries of science keep expanding. In the past, some aspects of evolution lay outside the then-current boundaries of falsifiable science, but new techniques and ideas are expanding those boundaries, and it is appropriate to re-examine some topics. Over the last few decades there appears to have been an increasingly strong assumption to look first (and only) for a physical cause. This decision is virtually never formally discussed; the assumption is simply made that some physical factor 'drives' evolution. It is necessary to examine our assumptions much more carefully: what is meant by physical factors 'driving' evolution, or by an 'explosive radiation'? Our discussion focuses on two of the six mass extinctions: the fifth, the events in the Late Cretaceous, and the sixth, which started at least 50,000 years ago and is ongoing. The Cretaceous/Tertiary boundary: the rise of birds and mammals. We have had a long-term interest (Cooper and Penny 1997) in designing tests to help evaluate whether the processes of microevolution are sufficient to explain macroevolution. The real challenge is to formulate hypotheses in a testable way. For example, the number of lineages of birds and mammals that survive from the Cretaceous to the present is one test. Our first estimate was 22 for birds, and current work is tending to increase this value. This still does not consider lineages that survived into the Tertiary and went extinct later. Our initial suggestion was probably too narrow, in that it lumped four models from Penny and Phillips (2004) into one model. This reduction is too simplistic, in that we need to know about survival and about ecological and morphological divergences during the Late Cretaceous, and whether crown groups of avian or mammalian orders may have existed back into the Cretaceous. More recently (Penny and Phillips 2004) we have formalised hypotheses about dinosaurs and pterosaurs, with the prediction that interactions between mammals (and ground-feeding birds) and dinosaurs would be most likely to affect the smallest dinosaurs, and similarly that interactions between birds and pterosaurs would particularly affect the smaller pterosaurs. There is now evidence for both classes of interactions, with the smallest dinosaurs and pterosaurs declining first, as predicted. Thus, testable models are now possible. Mass extinction number six: human impacts.
On a broad scale, there is a good correlation between the time of human arrival and increased extinctions (Hurles et al. 2003; Martin 2005; Figure 1). However, it is necessary to distinguish different time scales (Penny 2005), and on a finer scale there are still large numbers of possibilities. In Hurles et al. (2003) we mentioned habitat modification (including the use of fire) and introduced plants and animals (including kiore), in addition to direct predation (the 'overkill' hypothesis). We need also to consider the prey switching that occurs in early human societies, as evidenced by the results of Wragg (1995) on the middens of different ages on Henderson Island in the Pitcairn group. In addition, the presence of human-wary or human-adapted animals will affect the distribution in the subfossil record. A better understanding of human impacts world-wide, in conjunction with pre-scientific knowledge, will make it easier to discuss the issues by removing 'blame'. While continued spontaneous generation was accepted universally, there was the expectation that animals would continue to reappear. New Zealand is one of the very best locations in the world to study many of these issues: apart from the marine fossil record, some human impact events are extremely recent and the remains less disrupted by time.

Relevance:

10.00%

Publisher:

Abstract:

Local governments struggle to engage time-poor and seemingly apathetic citizens, as well as the city's young digital natives, the digital locals. Capturing the attention of this digitally literate community, who are technologically and socially savvy, adds a new quality to the challenge of community engagement for urban planning. This project developed and tested a lightweight design intervention towards removing the hierarchy between those who plan the city and those who use it. The aim is to narrow this gap by enhancing people's experience of physical spaces with digital, civic technologies that are directly accessible within that space. The study's research informed the development of a public screen system called Discussions In Space (DIS). It provides a feedback platform about specific topics, e.g., a concrete urban planning project, and encourages direct, in-situ, real-time user responses via SMS and Twitter. The thesis presents the findings of deploying and integrating DIS in a wide range of public and urban environments, including the iconic urban screen at Federation Square in Melbourne, to explore the related Human-Computer Interaction (HCI) challenges and implications. It was also deployed in conjunction with a major urban planning project in Brisbane to explore the system's opportunities and challenges in better engaging with Australia's new digital locals. Finally, the merits of the short-texted and ephemeral data generated by the system were evaluated in three focus groups with professional urban planners. DIS offers additional benefits for civic participation, as it gives voice to residents who otherwise would not easily be heard. It also promotes a positive attitude towards local governments and gathers complementary information different from that captured by more traditional public engagement tools.

Relevance:

10.00%

Publisher:

Abstract:

Purpose: Managers generally have discretion in determining how components of earnings are presented in financial statements, in distinguishing between 'normal' earnings and items classified as unusual, special, significant, exceptional or abnormal. Prior research has found that such intra-period classificatory choice is used as a form of earnings management. Prior to 2001, Australian accounting standards mandated that unusually large items of revenue and expense be classified as 'abnormal items' for financial reporting, but this classification was removed from accounting standards in 2001. This move by the regulators was partly in response to concerns that the abnormal classification was being used opportunistically to manage reported pre-abnormal earnings. This study extends the earnings management literature by examining the reporting of abnormal items for evidence of intra-period classificatory earnings management in the unique Australian setting.

Design/methodology/approach: This study investigates associations between the reporting of abnormal items and incentives in the form of analyst following and the earnings benchmarks of analysts' forecasts, earnings levels, and earnings changes, for a sample of Australian top-500 firms over the seven-year period from 1994 to 2000.

Findings: The findings suggest there are systematic differences between firms reporting abnormal items and those with no abnormal items. The results show evidence that, on average, firms shifted expense items from pre-abnormal earnings to bottom-line net income through reclassification as abnormal losses.

Originality/value: These findings suggest that the standard setters were justified in removing the 'abnormal' classification from the accounting standard. However, it cannot be assumed that all firms acted opportunistically in classifying items as abnormal. With the removal of the standardised classification of items outside normal operations as 'abnormal', firms lost the opportunity to use such disclosures as a signalling device, with the consequential effect of limiting the scope for effectively communicating information about the nature of items presented in financial reports.