935 results for hierarchical winner-take-all
Abstract:
Microorganisms play key roles in biogeochemical cycling by facilitating the release of nutrients from organic compounds. In doing so, microbial communities use different organic substrates that yield different amounts of energy for maintenance and growth of the community. Carbon utilization efficiency (CUE) is a measure of the efficiency with which substrate carbon is metabolized versus mineralized by the microbial biomass. In the face of global change, we wanted to know how temperature affected the efficiency with which the soil microbial community utilized an added labile substrate, and to determine the effect of labile soil carbon depletion (through increasing duration of incubation) on the community's ability to respond to an added substrate. Cellobiose was added to soil samples as a model compound at several times over the course of a long-term incubation experiment to measure the amount of carbon assimilated or lost as CO2 respiration. Results indicated that in all cases, the time required for the microbial community to take up the added substrate increased as incubation time prior to substrate addition increased. However, CUE was not affected by incubation time. Increased temperature generally decreased CUE; thus the microbial community was more efficient at 15 degrees C than at 25 degrees C. These results indicate that at warmer temperatures microbial communities may release more CO2 per unit of assimilated carbon. Current climate-carbon models use a fixed CUE to predict how much CO2 will be released as soil organic matter is decomposed. Based on our findings, this assumption may be incorrect due to the variation of CUE with changing temperature.
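The carbon utilization efficiency at the centre of this abstract reduces to a simple partitioning ratio. A minimal sketch follows; the 60/40 split is an invented illustration, not a value from the study:

```python
def carbon_use_efficiency(c_assimilated, c_respired):
    """CUE = carbon incorporated into microbial biomass divided by
    total carbon taken up (biomass C + CO2-C lost to respiration)."""
    return c_assimilated / (c_assimilated + c_respired)

# Hypothetical numbers: of 100 ug substrate C taken up,
# 60 ug goes to biomass and 40 ug is respired as CO2.
cue = carbon_use_efficiency(60.0, 40.0)   # 0.6
```

A fixed-CUE climate-carbon model would hold this ratio constant; the study's finding is that the ratio itself shifts downward as temperature rises.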
Abstract:
Architecture for a Free Subjectivity reformulates the French philosopher Gilles Deleuze's model of subjectivity for architecture, by surveying the prolific effects of architectural encounter, and the spaces that figure in them. For Deleuze and his Lacanian collaborator Félix Guattari, subjectivity does not refer to a person, but to the potential for and event of matter becoming subject, and the myriad ways for this to take place. By extension, this book theorizes architecture as a self-actuating or creative agency for the liberation of purely "impersonal effects." Imagine a chemical reaction, a riot in the banlieues, indeed a walk through a city. Simone Brott declares that the architectural object does not merely take part in the production of subjectivity, but that it constitutes its own.
Abstract:
It is important to examine the nature of the relationships between roadway, environmental, and traffic factors and motor vehicle crashes, with the aim of improving the collective understanding of the causal mechanisms involved in crashes and better predicting their occurrence. Statistical models of motor vehicle crashes are one path of inquiry often used to gain these initial insights. Recent efforts have focused on the estimation of negative binomial and Poisson regression models (and related variants) due to their relatively good fit to crash data. Of course, analysts constantly seek methods that offer greater consistency with the data generating mechanism (motor vehicle crashes in this case), provide better statistical fit, and provide insight into data structure that was previously unavailable. One such opportunity exists with some types of crash data, in particular crash-level data that are collected across roadway segments, intersections, etc. It is argued in this paper that some crash data possess hierarchical structure that has not routinely been exploited. This paper describes the application of binomial multilevel models of crash types using 548 motor vehicle crashes collected from 91 two-lane rural intersections in the state of Georgia. Crash prediction models are estimated for angle, rear-end, and sideswipe (both same direction and opposite direction) crashes. The contributions of the paper are the recognition of hierarchical data structure and the application of a theoretically appealing and suitable analysis approach for multilevel data, yielding insights into intersection-related crashes by crash type.
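The hierarchical structure described here (crashes nested within intersections) is typically captured with a random-intercept logistic model. A minimal sketch follows; the covariate names, coefficients, and random-intercept standard deviation are hypothetical placeholders, not estimates from the Georgia data:

```python
import math
import random

def crash_type_probability(x, beta, u_j):
    """Two-level binomial (logistic) model: fixed effects beta on
    crash-level covariates x, plus an intersection-level random
    intercept u_j shared by all crashes at intersection j."""
    eta = beta[0] + sum(b * xi for b, xi in zip(beta[1:], x)) + u_j
    return 1.0 / (1.0 + math.exp(-eta))

random.seed(0)
# Hypothetical fixed effects: intercept, scaled traffic volume, skew angle.
beta = [-1.2, 0.8, -0.4]
sigma_u = 0.5   # assumed sd of the intersection-level random intercepts
intersections = {j: random.gauss(0.0, sigma_u) for j in range(91)}

# Probability that a crash at intersection 3 is, say, an angle crash:
p = crash_type_probability([0.6, 1.1], beta, intersections[3])
```

The point of the multilevel formulation is that the shared u_j absorbs unobserved intersection-level heterogeneity that a pooled binomial model would ignore.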
Abstract:
The concept of asset management is not a new but an evolving idea that has been attracting the attention of many organisations operating and/or owning some kind of infrastructure assets. The term asset management has been used widely, with fundamental differences in interpretation and usage. Regardless of the context of its usage, asset management implies the process of optimising return by scrutinising performance and making key strategic decisions throughout all phases of an asset's lifecycle (Sarfi and Tao, 2004). Hence, asset management is a philosophy and discipline through which organisations are enabled to deploy their resources more effectively, providing higher levels of customer service and reliability while balancing financial objectives. In Australia, asset management made its way into the public works in 1993, when the Australian Accounting Standards Board issued Australian Accounting Standard 27 (AAS27). AAS27 required government agencies to capitalise and depreciate assets rather than expense them against earnings. This development indirectly forced organisations managing infrastructure assets to consider the useful life and cost effectiveness of asset investments. The Australian State Treasuries and the Australian National Audit Office were the first organisations to formalise the concepts and principles of asset management in Australia, defining asset management as "a systematic, structured process covering the whole life of an asset" (Australian National Audit Office, 1996). This initiative led other government bodies and industry sectors to develop, refine and apply the concept of asset management in the management of their respective infrastructure assets. Hence, it can be argued that asset management emerged as a separate and recognised field of management during the late 1990s.
In comparison to other disciplines such as construction, facilities, maintenance, project management, economics and finance, to name a few, asset management is a relatively new discipline and clearly a contemporary topic. The primary contributors to the literature in asset management are largely government organisations and industry practitioners, and these contributions take the form of guidelines and reports on best practice in asset management. More recently, some of these best practices have been formalised into standards, such as PAS 55 (IAM, 2004, IAM, 2008b) in the UK. As such, current literature in this field tends to lack well-grounded theories. To date, while receiving relatively more interest and attention from empirical researchers, the advancement of this field, particularly in terms of the volume of academic and theoretical development, is at best moderate. A plausible reason for this lack of advancement is that many researchers and practitioners are still unaware of, or unimpressed by, the contribution that asset management can make to the performance of infrastructure assets. This paper seeks to explore the practices of organisations that manage infrastructure assets in order to develop a framework of strategic infrastructure asset management processes. It begins by examining the development of asset management, followed by a discussion of the method adopted for this paper and of the results from the case studies. It first describes the goals of infrastructure asset management and how they can support broader business goals. Following this, a set of core processes that can support the achievement of business goals is provided. These core processes are synthesised from the practices of asset managers in the case study organisations.
Abstract:
Traffic control at road junctions is one of the major concerns in most metropolitan cities. Controllers of various approaches are available, and the required control action is the effective green time assigned to each traffic stream within a traffic-light cycle. The application of fuzzy logic provides the controller with the capability to handle the uncertain nature of the system, such as drivers' behaviour and random arrivals of vehicles. When turning traffic is allowed at the junction, the number of phases in the traffic-light cycle increases. The additional input variables inevitably complicate the controller and hence slow down the decision-making process, which is critical in this real-time control problem. In this paper, a hierarchical fuzzy logic controller is proposed to tackle this traffic control problem at a two-way road junction with turning traffic. The two levels of fuzzy logic controllers devise the minimum effective green time and fine-tune it, respectively, at each phase of a traffic-light cycle. The complexity of the controller at each level is reduced by a smaller rule set. The performance of this hierarchical controller is examined by comparison with a fixed-time controller under various traffic conditions. Substantial delay reduction has been achieved as a result, and the performance and limitations of the controller are discussed.
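The two-level division of labour described in this abstract can be sketched with triangular membership functions and weighted-average defuzzification. The membership ranges, rule consequents, and fallback values below are invented for illustration; the paper's actual rule base is not reproduced here:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def level1_min_green(queue):
    """Level 1: minimum effective green time (s) from queue length (vehicles).
    Rules (invented): short queue -> 10 s, medium -> 25 s, long -> 40 s."""
    rules = [(tri(queue, -1, 0, 10), 10.0),
             (tri(queue, 5, 12, 20), 25.0),
             (tri(queue, 15, 30, 10**6), 40.0)]   # open shoulder for long queues
    w = sum(mu for mu, _ in rules)
    return sum(mu * g for mu, g in rules) / w if w else 10.0

def level2_fine_tune(base_green, arrival_rate):
    """Level 2: fine-tune the level-1 green time from the arrival rate
    (veh/s) on the competing stream. Light traffic extends the green,
    heavy traffic shortens it (consequents invented)."""
    rules = [(tri(arrival_rate, -0.1, 0.0, 0.3), +5.0),
             (tri(arrival_rate, 0.2, 0.5, 1.0), -5.0)]
    w = sum(mu for mu, _ in rules)
    delta = sum(mu * d for mu, d in rules) / w if w else 0.0
    return base_green + delta

green = level2_fine_tune(level1_min_green(queue=14), arrival_rate=0.1)
```

Because each level only sees its own small rule set, the combined controller avoids the combinatorial growth of rules that a single flat fuzzy controller would face once turning phases are added.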
Abstract:
The rock pools, salt pans, cliffs and bluffs, and the banks of the Coorooman and Pumpkin Creeks within Darumbal and Woppaburra Country are used as a backdrop in this paper, which offers an exploration of one woman’s quest to undertake her PhD and develop as an Indigenous scholar. The paper describes this Country and the use of Country to nourish, develop, stimulate and support the intellect. It draws on Australian and international literature to demonstrate the intellectual growth and development of Indigenous scholars. The paper offers a highly personal narrative of intellectual journeying which shows how we can be agents of change and power in our individual lives, even while power is being exercised over us and we are being oppressed and marginalised as Indigenous peoples.
Abstract:
Neo-liberalism has become one of the boom concepts of our time. From its original reference point as a descriptor of the economics of "Chicago School" economists such as Milton Friedman, or of authors such as Friedrich von Hayek, neo-liberalism has become an all-purpose descriptor and explanatory device for phenomena as diverse as Bollywood weddings, standardized testing in schools, violence in Australian cinema, and the digitization of content in public libraries. Moreover, it has become an entirely pejorative term: no-one refers to their own views as "neo-liberal"; rather, it refers to the erroneous views held by others, whether they acknowledge this or not. Neo-liberalism as it has come to be used, then, bears many of the hallmarks of a dominant ideology theory in the classical Marxist sense, even if it is often not explored in these terms. This presentation will take the opportunity provided by the English language publication of Michel Foucault's 1978-79 lectures, under the title of The Birth of Biopolitics, to consider how he used the term neo-liberalism, and how this equates with its current uses in critical social and cultural theory. It will be argued that Foucault did not understand neo-liberalism as a dominant ideology in these lectures, but rather as marking a point of inflection in the historical evolution of liberal political philosophies of government. It will also be argued that his interpretation of neo-liberalism was more nuanced and more comparative than the more recent uses of Foucault in the literature on neo-liberalism. The presentation will also look at how Foucault develops comparative historical models of liberal capitalism in The Birth of Biopolitics, arguing that this dimension of his work has been lost in more recent interpretations, which tend to retro-fit Foucault to contemporary critiques of either U.S. neo-conservatism or the "Third Way" of Tony Blair's New Labour in the UK.
Abstract:
Traffic control at a road junction by a complex fuzzy logic controller is investigated. An increase in the complexity of the junction means that more input variables must be taken into account, which increases the number of fuzzy rules in the system. A hierarchical fuzzy logic controller is introduced to reduce the number of rules. Moreover, the increased complexity of the controller makes formulation of the fuzzy rules difficult, so a genetic-algorithm-based off-line learning algorithm is employed to generate them. The learning algorithm uses constant flow-rates as training sets, and the system is tested with both constant and time-varying flow-rates. Simulation results show that the proposed controller produces lower average delay than a fixed-time controller under various traffic conditions.
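The off-line learning loop described above can be sketched as a small genetic algorithm that tunes green-time consequents against a delay cost evaluated on constant flow-rate training sets. Everything here is invented for illustration: the delay function is a crude residual-queue proxy, not a traffic simulator, and the flow-rates, saturation flow, and GA settings are assumptions:

```python
import random

random.seed(1)

# Constant (NS, EW) flow-rates in veh/s used as training sets.
FLOWS = [(0.2, 0.1), (0.4, 0.4), (0.1, 0.5)]

def delay(g_ns, flow_ns, flow_ew, cycle=60.0):
    """Crude delay proxy: vehicles left unserved after one cycle, given the
    NS green time (EW gets the remainder) and a 0.5 veh/s saturation flow."""
    g_ew = cycle - g_ns
    served_ns = min(flow_ns * cycle, g_ns * 0.5)
    served_ew = min(flow_ew * cycle, g_ew * 0.5)
    return (flow_ns * cycle - served_ns) + (flow_ew * cycle - served_ew)

def fitness(chromo):
    """Chromosome: one NS green time per training flow-set; lower delay is fitter."""
    return -sum(delay(chromo[i], *f) for i, f in enumerate(FLOWS))

def evolve(pop_size=30, gens=60, n_genes=len(FLOWS)):
    pop = [[random.uniform(5, 55) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_genes)   # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n_genes)        # clamped Gaussian point mutation
            child[i] = min(55.0, max(5.0, child[i] + random.gauss(0, 3)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

In the paper the chromosome would instead encode fuzzy rule consequents and the fitness would come from simulation, but the search loop has the same shape.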
Abstract:
From the business viewpoint, the railway timetable is a list of the products presented by railway transportation operators to their customers, specifying the schedules of all the train services on a railway line or network. In order to evaluate the quality of the train service schedules, a number of indices are proposed in this paper. These indices primarily take the passengers' needs, such as waiting time, transfer time and transport capacity, into consideration. Delay rate is usually used in post-evaluation; in this study, we instead propose to evaluate the probability that the scheduled train services are likely to be delayed, and the recovery ability of the timetable after a delay has occurred. The evaluation identifies possible problems in the services, such as excessive waiting time, non-seamless transfer, and high possibility of delay. This paper also discusses how these problems can be improved through certain adjustments to the timetable. The indices for evaluation and the timetable adjustment method are then applied to a case study on the Hu-Ning-Hang railway in China, followed by a discussion of the merits of the proposed indices for timetable evaluation and possible improvement.
Abstract:
The demand for high quality rail services in the twenty-first century has put an ever increasing demand on all rail operators. In order to meet the expectations of their patrons, the maintenance regime of railway systems has to be tightened up, the track conditions have to be well looked after, and the rolling stock must be designed to withstand heavy duty. In short, in an ideal world where resources are unlimited, one needs to implement a very rigorous inspection regime in order to take care of the modern needs of a railway system [1]. If cost were not an issue, the maintenance engineers could inspect the train body by the most up-to-date techniques, such as ultrasound examination, x-ray inspection, magnetic particle inspection, etc., on a regular basis. However, it is inconceivable to have such a perfect maintenance regime in any commercial railway. Likewise, it is impossible to have a perfect rolling stock which can weather all the heavy duties experienced in a modern railway. Hence it is essential that condition monitoring schemes are devised to pick up potential defects which could manifest into safety hazards. This paper introduces an innovative condition monitoring system for track profile which, together with an instrumented car to carry out surveillance of the track, will provide a comprehensive railway condition monitoring system that is free from the usual difficulty of electromagnetic compatibility issues in a typical railway environment.
Abstract:
This session is titled TRANSFORM! Opportunities and Challenges of Digital Content for Creative Economy. Some of the key concepts for this session include: 1. City / Economy 2. Creativity 3. Digital content 4. Transformation. All of us would agree that these terms describe pertinent characteristics of the contemporary world, the epithet of which is the 'network era.' I was thinking about what I would like to discuss here and what you, leading experts in divergent fields, would be interested to hear about. As the keynote for this session and as one of the first speakers for the entire conference, I see my role as an initiator for imagination, the wilder the better, posing questions rather than answers. Also, given the session title Transform!, I wish to change this slightly to Transforming People, Place, and Technology: Towards Re-creative City in an attempt to take us away a little from the usual image depicted by the given topic. Instead, I intend to sketch a more holistic picture by reflecting on and extrapolating the four key concepts from the urban informatics point of view. To do so, I use 'city' as the primary guiding concept for my talk rather than the probably more expected 'digital media' or 'creative economy.' You may wonder what I mean by re-creative city. I will explain this in time by looking at the key concepts from these four respective angles: 1. Living city 2. Creative city 3. Re-creative city 4. Opportunities and Challenges, to arrive at a speculative yet probable image of the city that we may aspire to transform our current cities into. First let us start by considering the 'living city.'
Abstract:
Prolific British author/illustrator Anthony Browne both participates in the classic fairy-tale tradition and appropriates its cultural capital, ultimately undertaking a process of self-canonisation alongside the dissemination of fairy tales. In reading Browne’s Hansel and Gretel (1981), The Tunnel (1989) and Into the Forest (2004), a trajectory emerges that moves from broadly intertextual to more exclusively self-referential modes of representation which reward readers of “Anthony Browne”, rather than readers of “fairy tales”. All three books depict ‘babes in the woods’ stories wherein child characters must negotiate some form of threat outside the home in order to return home safely. Thus, they represent childhood agency. However, these visions of agency are ultimately subordinated to logics of capital, which means that child readers of Browne’s fairy-tale books are overtly invited to identify with children who act, but are interpellated as privileged if they ‘know’. Bourdieu’s model of ‘cultural capital’ offers a lens for considering Browne’s production of ‘value’ for his own works within a broader cultural landscape which privileges literary fairy tales as a register of juvenile cultural competency. If cultural capital can be formulated most simply as the symbolic exchange value of approved modes of knowing and being, it is clearly helpful when trying to unpack logics of meaning within heavily intertextual or citational texts. It is also helpful thinking about what kinds of stories we as a culture choose to disseminate, choose to privilege, or choose to suppress. Zipes notes of fairy tales that, “the genre itself becomes a kind of institute that is involved in the socialization and acculturation of readers” (22). 
He elaborates that, “We initiate readers and expect them to learn the fairy-tale code as part of our responsibility in the civilizing process” (Zipes 29), so it is little wonder that Tatar describes fairy tales as “a vital part of our cultural capital” (xix). Although Browne is clearly interested in literary fairy tales, the most obvious strategies of self-canonisation take place in Browne’s work not in words but in pictures: hidden in plain sight, as illustration becomes self-reflexive citation.
Abstract:
Bronfenbrenner's Bioecological Model, expressed as the developmental equation D = f(PPCT), is the theoretical framework for two studies that bring together diverse strands of psychology to study the work-life interface of working adults. Occupational and organizational psychology focuses on the demands and resources of work and family, without emphasising the individual in detail. Health and personality psychology examine the individual, but without emphasis on the individual's work and family roles. The current research used Bronfenbrenner's theoretical framework to combine individual differences, work and family to understand how these factors influence the working adult's psychological functioning. Competent development has been defined as high well-being (measured as life satisfaction and psychological well-being) and high work engagement (as work vigour, work dedication and absorption in work), and as the absence of mental illness (as depression, anxiety and stress) and the absence of burnout (as emotional exhaustion, cynicism and professional efficacy). Studies 1 and 2 were linked, with Study 1 a cross-sectional survey and Study 2 a prospective panel study that followed on from the data used in Study 1. Participants were recruited from a university and from a large public hospital to take part in a 3-wave, online study where they completed identical surveys at 3-4 month intervals (N = 470 at Time 1 and N = 198 at Time 3). In Study 1, hierarchical multiple regressions were used to assess the effects of individual differences (Block 1, e.g. dispositional optimism, coping self-efficacy, perceived control of time, humour), work and family variables (Block 2, e.g. affective commitment, skill discretion, work hours, children, marital status, family demands) and the work-life interface (Block 3, e.g. direction and quality of spillover between roles, work-life balance) on the outcomes.
There was a mosaic of predictors of the outcomes, with a group of seven that were the most frequent significant predictors and which represented the individual (dispositional optimism and coping self-efficacy), the workplace (skill discretion, affective commitment and job autonomy) and the work-life interface (negative work-to-family spillover and negative family-to-work spillover). Interestingly, gender and working hours were not important predictors. The effects of job social support (generally and for work-life issues), perceived control of time and egalitarian gender roles on the outcomes were mediated by negative work-to-family spillover, particularly for emotional exhaustion. Further, the effect of negative spillover on depression, anxiety and work engagement was moderated by the individual's personal and workplace resources. Study 2 modelled the longitudinal relationships between the group of the seven most frequent predictors and the outcomes. Using a set of non-nested models, the relative influences of concurrent functioning, stability and change over time were assessed. The modelling began with models at Time 1, which formed the basis for confirmatory factor analysis (CFA) to establish the underlying relationships between the variables and to calculate the composite variables for the longitudinal models. The CFAs were well fitting, with few modifications needed to ensure good fit. However, using burnout and work engagement together required additional analyses to resolve poor fit, with one factor (representing a continuum from burnout to work engagement) being the only acceptable solution. Five different longitudinal models were investigated, as the Well-Being, Mental Distress, Well-Being-Mental Health, Work Engagement and Integrated models, using differing combinations of the outcomes. The best fitting model for each was a reciprocal model that was trimmed of trivial paths. The strongest paths were the synchronous correlations and the paths within variables over time.
The reciprocal paths were more variable, with weak to mild effects. There was evidence of gain and loss spirals between the variables over time, with a slight net gain in resources that may provide the mechanism for the accumulation of psychological advantage over a lifetime. The longitudinal models also showed that there are leverage points at which personal, psychological and managerial interventions can be targeted to bolster the individual and provide supportive workplace conditions that also minimise negative spillover. Bronfenbrenner's developmental equation has been a useful framework for the current research, showing the importance of the person as central to the individual's experience of the work-life interface. By taking control of their own life, the individual can craft a life path that is most suited to their own needs. Competent developmental outcomes were most likely where the person was optimistic and had high self-efficacy, worked in a job that they were attached to and which allowed them to use their talents, and without too much negative spillover between their work and family domains. In this way, individuals had greater well-being, better mental health and greater work engagement at any one time and across time.
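The hierarchical (block-wise) multiple regression used in Study 1 amounts to fitting nested OLS models and comparing the incremental R² as each block enters. A minimal pure-Python sketch; the predictor names echo the study's variables but the data values are invented toy numbers, not the study's:

```python
def ols_r2(X, y):
    """R^2 of the least-squares fit of y on the columns of X (intercept
    added), via normal equations solved by Gaussian elimination."""
    rows = [[1.0] + list(x) for x in X]
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for i in range(k):                          # forward elimination, partial pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):              # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    yhat = [sum(bi * ri for bi, ri in zip(beta, r)) for r in rows]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - hi) ** 2 for yi, hi in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Invented toy data: an individual-differences predictor (Block 1)
# and a workplace predictor (Block 2) against a well-being outcome.
optimism  = [3.1, 4.0, 2.5, 3.6, 4.4, 2.9, 3.8, 4.1]
skill     = [2.0, 3.5, 1.8, 2.9, 4.0, 2.2, 3.1, 3.9]
wellbeing = [3.0, 4.2, 2.4, 3.5, 4.6, 2.8, 3.7, 4.3]

r2_block1 = ols_r2([[o] for o in optimism], wellbeing)
r2_block2 = ols_r2(list(zip(optimism, skill)), wellbeing)
delta_r2  = r2_block2 - r2_block1   # incremental variance explained by Block 2
```

Because the models are nested, delta_r2 is non-negative; in the study it is this increment (tested for significance) that tells whether each successive block adds explanatory power beyond the blocks before it.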
Abstract:
This manuscript took a 'top down' approach to understanding survival of inhabitant cells in the ecosystem bone, working from higher to lower length and time scales through the hierarchical ecosystem of bone. Our working hypothesis is that nature "engineered" the skeleton using a 'bottom up' approach, where mechanical properties of cells emerge from their adaptation to their local mechanical milieu. Cell aggregation and formation of higher order anisotropic structure result in emergent architectures through cell differentiation and extracellular matrix secretion. These emergent properties, including mechanical properties and architecture, result in mechanical adaptation at length scales and longer time scales which are most relevant for the survival of the vertebrate organism [Knothe Tate and von Recum 2009]. We are currently using insights from this approach to harness nature's regeneration potential and to engineer novel mechanoactive materials [Knothe Tate et al. 2007, Knothe Tate et al. 2009]. In addition to potential applications of these exciting insights, these studies may provide important clues to the evolution and development of vertebrate animals. For instance, one might ask why mesenchymal stem cells condense at all. There is a putative advantage to self-assembly and cooperation, but this advantage is somewhat outweighed by the need for infrastructural complexity (e.g., circulatory systems comprised of specific differentiated cell types which in turn form conduits and pumps to overcome the limitations of mass transport via diffusion; diffusion is untenable for multicellular organisms larger than 250 microns in diameter). A better question might be: why do cells build skeletal tissue? Once cooperating cells in tissues begin to deplete local sources of food in their aquatic environment, those that have evolved a means to locomote likely have an evolutionary advantage. Once the environment becomes less aquatic and more terrestrial, self-assembled organisms with the ability to move on land might have conferred evolutionary advantages as well. So did the cytoskeleton evolve over several length scales, enabling the emergence of skeletal architecture for vertebrate animals? Did the evolutionary advantage of motility over noncompliant terrestrial substrates (walking on land) favor adaptations including the emergence of intracellular architecture (changes in the cytoskeleton and upregulation of structural protein manufacture), inter-cellular condensation, mineralization of tissues, and the emergence of higher order architectures? How far does evolutionary Darwinism extend, and how can we exploit this knowledge to engineer smart materials and architectures on Earth and in new, exploratory environments? [Knothe Tate et al. 2008]. We are limited only by our ability to imagine. Ultimately, we aim to understand nature, mimic nature, guide nature and/or exploit nature's engineering paradigms without engineering ourselves out of existence.
Abstract:
Purpose: The aim of this study was to determine current approaches adopted by optometrists to the recording of corneal staining following fluorescein instillation. Methods: An anonymous ‘record-keeping task’ was sent to all 756 practitioners who are members of the Queensland Division of Optometrists Association Australia. This task comprised a form on which appeared a colour photograph depicting contact lens solution-induced corneal staining. Next to the photograph was an empty box, in which practitioners were asked to record their observations. Practitioners were also asked to indicate the level of severity of the condition at which treatment would be instigated. Results: Completed task forms were returned by 228 optometrists, representing a 30 per cent response rate. Ninety-two per cent of respondents offered a diagnosis. The most commonly used descriptive terms were ‘superficial punctate keratitis’ (36 per cent of respondents) and ‘punctate staining’ (29 per cent). The level of severity and location of corneal staining were noted by 69 and 68 per cent of respondents, respectively. A numerical grade was assigned by 44 per cent of respondents. Only three per cent nominated the grading scale used. The standard deviation of assigned grades was ±0.6. The condition was sketched by 35 per cent of respondents and two per cent stated that they would take a photograph of the eye. Ten per cent noted the eye in which the condition was being observed. Opinions of the level of severity at which treatment for corneal staining should be instigated varied considerably between practitioners, ranging from ‘any sign of corneal staining’ to ‘grade 4 staining’. Conclusion: Although most practitioners made a sensible note of the condition and properly recorded the location of corneal staining, serious deficiencies were evident regarding other aspects of record-keeping.
Ongoing programs of professional optometric education should reinforce good practice in relation to clinical record-keeping.