539 results for G-extremal processes
Abstract:
Ubiquitination involves the attachment of ubiquitin (Ub) to lysine residues on substrate proteins or on Ub itself, which can result in protein monoubiquitination or polyubiquitination. Polyubiquitination through any of Ub's seven lysines or its N-terminus can generate different protein-Ub structures. These include monoubiquitinated proteins, polyubiquitinated proteins bearing homotypic chains linked through a particular Ub lysine, and mixed polyubiquitin chains generated by polymerization through different Ub lysines. The ability of the ubiquitination pathway to generate different protein-Ub structures gives it the versatility to target proteins to different fates. Protein ubiquitination is catalyzed by Ub-conjugating and Ub-ligase enzymes, with different combinations of these enzymes specifying the type of Ub modification on protein substrates. How Ub-conjugating and Ub-ligase enzymes generate this structural diversity is not clearly understood. In this review, we discuss mechanisms used by Ub-conjugating and Ub-ligase enzymes to generate structural diversity during protein ubiquitination, with a focus on recent mechanistic insights into protein monoubiquitination and polyubiquitination.
Abstract:
In 1991, McNabb introduced the concept of mean action time (MAT) as a finite measure of the time required for a diffusive process to effectively reach steady state. Although this concept was initially adopted by others within the Australian and New Zealand applied mathematics community, it appears to have had little use outside this region until very recently, when in 2010 Berezhkovskii and coworkers rediscovered the concept of MAT in their study of morphogen gradient formation. All previous work in this area has been limited to studying single–species differential equations, such as the linear advection–diffusion–reaction equation. Here we generalise the concept of MAT by showing how the theory can be applied to coupled linear processes. We begin by studying coupled ordinary differential equations and extend our approach to coupled partial differential equations. Our new results have broad applications including the analysis of models describing coupled chemical decay and cell differentiation processes, amongst others.
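For readers unfamiliar with the construction, a minimal sketch of the standard single-species definition (the notation below is ours, not necessarily the paper's): for a field C(x,t) relaxing from its initial condition C(x,0) to a steady state C_\infty(x), define

\[
F(x,t) \;=\; 1 \;-\; \frac{C(x,t) - C_\infty(x)}{C(x,0) - C_\infty(x)},
\]

which increases from 0 to 1 in t and can be treated as a cumulative distribution function of the "action time". The mean action time is its first moment,

\[
T(x) \;=\; \int_0^\infty t \,\frac{\partial F}{\partial t}\, \mathrm{d}t \;=\; \int_0^\infty \big( 1 - F(x,t) \big)\, \mathrm{d}t,
\]

which is finite whenever the transient decays sufficiently quickly. The coupled generalisation described in the abstract applies this moment construction to vector-valued (multi-species) problems.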
Abstract:
The development of the capacity for self-regulation represents an important achievement of childhood and is associated with social, behavioral, and academic competence (Bronson, 2001; Cleary & Zimmerman, 2004). Self-regulation evolves as individuals mature, with its final form integrating emotional, cognitive, and behavioral elements working together to achieve self-selected goals. This evolution is closely intertwined with the innate press to master the environment, labeled mastery motivation (Morgan, Harmon, & Maslin-Cole, 1990), as competence is the aim that underpins mastery motivation.
Abstract:
Lignocellulosic materials including agricultural, municipal and forestry residues, and dedicated bioenergy crops offer significant potential as a renewable feedstock for the production of fuels and chemicals. These products can be chemically or functionally equivalent to existing products that are produced from fossil-based feedstocks. To unlock the potential of lignocellulosic materials, it is necessary to pretreat or fractionate the biomass to make it amenable to downstream processing. This chapter explores current and developing technologies for the pretreatment and fractionation of lignocellulosic biomass for the production of chemicals and fuels.
Abstract:
‘Social innovation’ is a construct increasingly used to explain the practices, processes and actors through which sustained positive transformation occurs in the network society (Mulgan, G., Tucker, S., Ali, R., & Sander, B. (2007). Social innovation: What it is, why it matters and how it can be accelerated. Oxford: Skoll Centre for Social Entrepreneurship; Phills, J. A., Deiglmeier, K., & Miller, D. T. Stanford Social Innovation Review, 6(4):34–43, 2008). Social innovation has been defined as a “novel solution to a social problem that is more effective, efficient, sustainable, or just than existing solutions, and for which the value created accrues primarily to society as a whole rather than private individuals” (Phills, J. A., Deiglmeier, K., & Miller, D. T. Stanford Social Innovation Review, 6(4):34–43, 2008: 34). Emergent ideas of social innovation challenge some traditional understandings of the nature and role of the Third Sector, as well as shining a light on those enterprises within the social economy that configure resources in novel ways. In this context, social enterprises – which provide a social or community benefit and trade to fulfil their mission – have attracted considerable policy attention as one source of social innovation within a wider field of action (see Leadbeater, C. (2007). Social enterprise and social innovation: Strategies for the next 10 years. Cabinet Office, Office of the Third Sector. http://www.charlesleadbeater.net/cms xstandard/social_enterprise_innovation.pdf. Last accessed 19/5/2011). And yet, while social enterprise seems to have gained some symbolic traction in society, there is to date relatively limited evidence of its real-world impacts (Dart, R. Not for Profit Management and Leadership, 14(4):411–424, 2004). In other words, we do not know much about the social innovation capabilities and effects of social enterprise. In this chapter, we consider the social innovation practices of social enterprise, drawing on the three dimensions of social innovation identified by Mulgan, Tucker, Ali and Sander (2007: 5): new combinations or hybrids of existing elements; cutting across organisational, sectoral and disciplinary boundaries; and leaving behind compelling new relationships. Based on a detailed survey of 365 Australian social enterprises, we examine their self-reported business and mission-related innovations, the ways in which they configure and access resources, and the practices through which they diffuse innovation in support of their mission. We then consider how these findings inform our understanding of the social innovation capabilities and effects of social enterprise, and their implications for public policy development.
Abstract:
Australian queer (GLBTIQ) university student activist media is an important site of self-representation. Community media is a significant site for the development of queer identity and community, and a key part of queer politics. This paper reviews my research into queer student media, which is grounded in a queer theoretical perspective. Rob Cover argues that queer theoretical approaches that study media products fail to consider the material contexts that contribute to their construction. I use an ethnographic approach to examine how editors construct queer identity and community in queer student media. My research contributes to queer media scholarship by addressing the gap that Cover identifies, and to the rich scholarship on negotiations of queer community.
Abstract:
As indicated in a previous Teaching Science article, effective planning for curricula integration requires using standards from two (or more) subject areas (e.g., science and English, science and art, or science and mathematics), which also become the assessment foci for teaching and learning. Curricula integration of standards into an activity necessitates pedagogical knowledge for developing students’ learning in both subject areas. For science education, the skills and tools for curricula integration include the use of other key learning areas (KLAs). A balance between teacher- and student-centred science education programs that draw on democratic processes (e.g., Beane, 1997) can be used to make real-world links that target students’ individual needs. This article presents practical ways to commence thinking about curricula integration using Australian curriculum standards.
Abstract:
Readily accepted knowledge regarding crash causation is consistently omitted from efforts to model and subsequently understand motor vehicle crash occurrence and its contributing factors. For instance, distracted and impaired driving account for a significant proportion of crashes, yet are rarely modeled explicitly. In addition, spatially allocated influences such as local law enforcement efforts, proximity to bars and schools, and roadside chronic distractions (advertising, pedestrians, etc.) play a role in contributing to crash occurrence and yet are routinely absent from crash models. By and large, these well-established omitted effects are simply assumed to contribute to model error, with the predominant focus on modeling the engineering and operational effects of transportation facilities (e.g. AADT, number of lanes, speed limits, width of lanes, etc.). The typical analytical approach, with a variety of statistical enhancements, has been to model crashes that occur at system locations as negative binomial (NB) distributed events that arise from a singular, underlying crash generating process. These models and their statistical kin dominate the literature; however, it is argued in this paper that these models fail to capture the underlying complexity of motor vehicle crash causes, and thus thwart deeper insights regarding crash causation and prevention. This paper first describes hypothetical scenarios that collectively illustrate why current models mislead highway safety researchers and engineers. It is argued that current model shortcomings are significant and will lead to poor decision-making. Exploiting our current state of knowledge of crash causation, crash counts are postulated to arise from three processes: observed network features, unobserved spatial effects, and ‘apparent’ random influences that largely reflect behavioral influences of drivers. It is argued, furthermore, that these three processes can in theory be modeled separately to gain deeper insight into crash causes, and that the resulting model represents a more realistic depiction of reality than the state-of-practice NB regression. An admittedly imperfect empirical model that mixes three independent crash occurrence processes is shown to outperform the classical NB model. The questioning of current modeling assumptions and the implications of the latent mixture model for current practice are the most important contributions of this paper, with an initial but rather vulnerable attempt to model the latent mixtures as a secondary contribution.
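As a toy illustration of the central claim (this is not the authors' model), the Python sketch below simulates crash counts as a mixture of three latent processes and compares the fit of a single negative binomial distribution against a three-component Poisson mixture estimated with a short EM loop. All weights, rates and sample sizes are invented for illustration.

import numpy as np
from scipy import stats
from scipy.special import logsumexp

rng = np.random.default_rng(0)

# Three hypothetical latent processes (network features, spatial effects, behavioural
# randomness); the component weights and Poisson rates are purely illustrative.
weights = np.array([0.5, 0.3, 0.2])
rates = np.array([1.0, 4.0, 10.0])
z = rng.choice(3, size=5000, p=weights)
counts = rng.poisson(rates[z])

# Single negative binomial fit via method of moments (the "state of practice" benchmark).
m, v = counts.mean(), counts.var()
p = m / v                     # valid because the mixture is overdispersed (v > m)
r = m * p / (1.0 - p)
nb_ll = stats.nbinom.logpmf(counts, r, p).sum()

# Three-component Poisson mixture fitted by EM.
w = np.full(3, 1.0 / 3.0)
lam = np.array([1.0, 5.0, 9.0])   # arbitrary starting values
for _ in range(200):
    log_resp = np.log(w) + stats.poisson.logpmf(counts[:, None], lam)
    resp = np.exp(log_resp - logsumexp(log_resp, axis=1, keepdims=True))
    w = resp.mean(axis=0)
    lam = (resp * counts[:, None]).sum(axis=0) / resp.sum(axis=0)

mix_ll = logsumexp(np.log(w) + stats.poisson.logpmf(counts[:, None], lam), axis=1).sum()
print(f"negative binomial log-likelihood:   {nb_ll:.1f}")
print(f"3-component mixture log-likelihood: {mix_ll:.1f}")

On such synthetic data the mixture attains a higher log-likelihood than the single NB fit, mirroring in a toy setting the empirical comparison reported in the abstract.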
Abstract:
The state of the practice in safety has advanced rapidly in recent years with the emergence of new tools and processes for improving the selection of the most cost-effective safety countermeasures. However, many challenges prevent fair and objective comparisons of countermeasures applied across safety disciplines (e.g. engineering, emergency services, and behavioral measures). These countermeasures operate at different spatial scales, are often funded by different financial sources and agencies, and have associated costs and benefits that are difficult to estimate. This research proposes a methodology by which both behavioral and engineering safety investments are considered and compared in a specific local context. The methodology involves a multi-stage process that enables the analyst to select countermeasures that yield high benefits relative to costs, are targeted for a particular project, and may involve costs and benefits accruing over varying spatial and temporal scales. The methodology is illustrated using a case study from the Geary Boulevard Corridor in San Francisco, California. The case study illustrates that: 1) the methodology enables the identification and assessment of a wide range of safety investment types at the project level; 2) the nature of crash histories lends itself to the selection of both behavioral and engineering investments, requiring cooperation across agencies; and 3) the results of the cost-benefit analysis are highly sensitive to cost and benefit assumptions, so all assumptions must be listed and justified. It is recommended that a sensitivity analysis be conducted when there is large uncertainty surrounding cost and benefit assumptions.
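The comparison of behavioural and engineering countermeasures ultimately rests on discounted benefit-cost ratios; the short Python sketch below illustrates that arithmetic only. Every countermeasure name, cost, benefit, discount rate and horizon is hypothetical and not taken from the Geary Boulevard case study.

def present_value(annual_amount, rate, years):
    """Discount a constant annual amount over `years` at discount rate `rate`."""
    return sum(annual_amount / (1.0 + rate) ** t for t in range(1, years + 1))

def benefit_cost_ratio(initial_cost, annual_cost, annual_benefit, rate, years):
    """BCR = PV(benefits) / PV(costs), with costs split into up-front and recurring parts."""
    pv_benefits = present_value(annual_benefit, rate, years)
    pv_costs = initial_cost + present_value(annual_cost, rate, years)
    return pv_benefits / pv_costs

# Hypothetical countermeasures (dollar values and horizons are invented).
candidates = {
    "signal retiming (engineering)": dict(initial_cost=250_000, annual_cost=10_000,
                                          annual_benefit=90_000, years=10),
    "enforcement campaign (behavioural)": dict(initial_cost=20_000, annual_cost=60_000,
                                               annual_benefit=85_000, years=5),
}

for rate in (0.03, 0.07):   # crude sensitivity sweep over the discount rate
    for name, kw in candidates.items():
        bcr = benefit_cost_ratio(rate=rate, **kw)
        print(f"rate={rate:.0%}  {name}: BCR = {bcr:.2f}")

The sweep over discount rates echoes the abstract's point that results are highly sensitive to cost and benefit assumptions, so those assumptions should be stated and varied.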
Abstract:
This case study explores alternative and experimental methods of research data acquisition through an emerging research methodology, ‘Guerrilla Research Tactics’ [GRT]. The premise is that the researcher develops covert tactics for attracting and engaging with research participants. These methods range from simple analogue interventions to bespoke physical artefacts that contain an embedded digital link to a live, interactive data-collecting resource, such as an online poll or survey. These artefacts are purposefully placed in environments where the researcher anticipates an encounter and response from the potential research participant; the choice of design and placement of artefacts is specific and intentional. This case study assesses the application of GRT as an alternative, engaging and interactive method of data acquisition for higher degree research. Extending Gauntlett’s definition of ‘new creative methods… an alternative to language driven qualitative research methods’ (2007), this case study contributes to the existing body of literature addressing creative and interactive approaches to HDR data collection. The case study was undertaken with Master of Architecture and Urban Design research students at QUT in 2012. Typically, students within these creative disciplines view research as a taxing and boring process that distracts them from their studio design focus, and an obstacle many students face is acquiring data from their intended participant groups. In response to these challenges, the authors worked with students to develop research methods that are creative, fun and engaging for both the students and their research participants. GRT is influenced by and developed from a combination of participatory action research (Kindon, 2008) and unobtrusive research methods (Kellehear, 1993) to enhance social research; it takes unobtrusive research in a new direction, beyond typical social research methods. The Masters research students developed alternative methods for acquiring data, which relied on a combination of analogue design interventions and online platforms commonly distributed through social networks. They identified critical issues that required action by the community, and the processes they developed focused on engaging with communities to propose solutions. Key characteristics shared between GRT and guerrilla activism are notions of political issues, the unexpected, the unconventional, and being interactive, unique and thought provoking. The trend of guerrilla activism has been adapted to marketing, communication, gardening, craftivism, theatre, poetry and art. Focusing on the action element and examining current trends within guerrilla marketing, we believe that GRT can be applied to a range of research areas within various academic disciplines.
Abstract:
Reliable communication is one of the major concerns in wireless sensor networks (WSNs). Multipath routing is an effective way to improve communication reliability in WSNs. However, most existing multipath routing protocols for sensor networks are reactive and require dynamic route discovery. If there are many sensor nodes between a source and a destination, the route discovery process creates a long end-to-end transmission delay, which causes difficulties for time-critical applications. To overcome this difficulty, efficient route update and maintenance processes are proposed in this paper. They aim to limit the amount of routing overhead using a two-tier routing architecture and to replace the periodic update process, which is the main source of unnecessary routing overhead, with a combination of piggyback and trigger updates. Simulations are carried out to demonstrate the effectiveness of the proposed processes in reducing the total routing overhead compared with existing popular routing protocols.
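To see why piggybacked and triggered updates cut control traffic relative to periodic updates, a back-of-the-envelope Python sketch is given below. It is an illustrative counting model, not the protocol proposed in the paper; all node counts, packet rates and intervals are invented.

# Illustrative overhead comparison (hypothetical parameters, not the paper's protocol).
SIM_TIME = 3600.0          # seconds simulated
DATA_RATE = 0.5            # data packets per node per second
NUM_NODES = 50
PERIODIC_INTERVAL = 10.0   # seconds between periodic route updates
TOPOLOGY_CHANGES = 20      # events forcing a triggered update during the simulation

data_packets = NUM_NODES * DATA_RATE * SIM_TIME

# Scheme A: every node emits a dedicated periodic update each interval.
periodic_control = NUM_NODES * (SIM_TIME / PERIODIC_INTERVAL)

# Scheme B: route state rides inside data packets already being sent (piggyback),
# plus one triggered control packet per node per topology change.
piggyback_control = NUM_NODES * TOPOLOGY_CHANGES

for name, control in (("periodic", periodic_control),
                      ("piggyback + trigger", piggyback_control)):
    overhead = control / (control + data_packets)
    print(f"{name}: {control:.0f} control packets, overhead ratio {overhead:.1%}")

Under these assumptions the triggered scheme generates control packets only when topology actually changes, so its overhead grows with network dynamics rather than with elapsed time.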
Abstract:
Pillar of salt: (3 hand-applied silver gelatin photographs) Statement: For women moving into new experiences and spaces, loss and hardship is often a price to be paid. These courageous women look back to things they have overcome in order to continue to grow.
Abstract:
A pressing cost issue facing construction is the procurement of off-site pre-manufactured assemblies. In order to encourage Australian adoption of off-site manufacture (OSM), a new approach to the underlying processes is required. The advent of object-oriented digital models for construction design assumes intelligent use of data. However, the construction production system relies on traditional methods and data sources, and is expected to benefit from the application of well-established business process management techniques. Integrating the old and new data sources allows for the development of business process models which, by capturing typical construction processes involving OSM, provide insights into such processes. This integrative approach is the foundation of research into the use of OSM to increase construction productivity in Australia. The purpose of this study is to develop business process models capturing the procurement, resources and information flow of construction projects. For each stage of the construction value chain, a number of sub-processes are identified. Business Process Modelling Notation (BPMN), a mainstream business process modelling standard, is used to create baseline generic construction process models. These models identify OSM decision-making points that could provide cost reductions in procurement workflow and management systems. This paper reports on phase one of ongoing research aiming to develop a prototype workflow application that can provide semi-automated support to construction processes involving OSM and assist in decision-making on the adoption of OSM, thus contributing to a sustainable built environment.
Abstract:
In this paper we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm, where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one pre-processes the source image and template/model with a bank of filters (e.g. oriented edges, Gabor, etc.) as: (i) it can handle substantial illumination variations; (ii) the inefficient pre-processing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix; (iii) unlike traditional LK, the computational cost is invariant to the number of filters, making the approach far more efficient; and (iv) the approach can be extended to the inverse compositional form of the LK algorithm, where nearly all steps (including the Fourier transform and filter bank pre-processing) can be pre-computed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to non-rigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs).
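A sketch of the weighted objective that points (ii) and (iii) allude to (the notation is ours and simplified, not reproduced from the paper): writing the error image as e(x) = I(W(x; p + \Delta p)) - T(x) and filtering it with a bank {g_i}, the spatial objective and its Fourier counterpart agree up to a constant scale factor from Parseval's relation,

\[
\min_{\Delta p} \sum_i \big\| g_i * e \big\|_2^2
\;=\;
\min_{\Delta p}\; \hat{e}^{*} S\, \hat{e},
\qquad
S = \sum_i \operatorname{diag}(\hat{g}_i)^{*} \operatorname{diag}(\hat{g}_i),
\]

where hats denote 2D Fourier transforms. Because S is a fixed, sparse diagonal matrix that can be pre-computed once, the cost of each alignment iteration does not grow with the number of filters, which is consistent with the efficiency claims in the abstract.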