915 results for drives


Relevance:

10.00%

Publisher:

Abstract:

Computer resource allocation is a significant challenge, particularly for multiprocessor systems, in which shared computing resources must be allocated among co-runner processes and threads. While efficient resource allocation yields a stable, high-performing multiprocessor system and good individual thread performance, poor resource allocation causes significant performance bottlenecks even in systems with abundant computing resources. This thesis proposes a cache-aware adaptive closed-loop scheduling framework as an efficient resource allocation strategy for this highly dynamic resource management problem, which requires instant estimation of highly uncertain and unpredictable resource patterns. Many approaches to this problem have been developed, but neither the dynamic nature nor the time-varying and uncertain characteristics of resource allocation are well considered. These approaches employ either static or dynamic optimization methods or advanced scheduling algorithms such as Proportional Fair (PFair) scheduling. Some approaches that do consider the dynamic nature of multiprocessor systems apply only a basic closed-loop system, and hence fail to take the time-varying and uncertain nature of the system into account. Therefore, further research into multiprocessor resource allocation is required. Our closed-loop cache-aware adaptive scheduling framework takes resource availability and resource usage patterns into account by measuring time-varying factors such as cache miss counts, stalls and instruction counts. More specifically, the cache usage pattern of each thread is identified using the QR recursive least squares (RLS) algorithm and cache miss count time-series statistics. For the identified cache resource dynamics, the framework enforces instruction fairness for the threads.
Fairness, in the context of our research project, is defined as resource allocation equity, which reduces co-runner thread dependence in a shared resource environment. In this way, instruction count degradation due to shared cache resource conflicts is overcome. In this respect, our closed-loop cache-aware adaptive scheduling framework contributes to the research field in two major and three minor aspects. The two major contributions lead to the cache-aware scheduling system. The first major contribution is the development of the execution fairness algorithm, which reduces the co-runner cache impact on thread performance. The second is the development of the relevant mathematical models, such as the thread execution pattern and cache access pattern models, which formulate the execution fairness algorithm in terms of mathematical quantities. Following the development of the cache-aware scheduling system, our adaptive self-tuning control framework is constructed to add an adaptive closed-loop aspect to it. This control framework consists of two main components: the parameter estimator and the controller design module. The first minor contribution is the development of the parameter estimators; the QR recursive least squares (RLS) algorithm is applied in our framework to estimate the highly uncertain and time-varying cache resource patterns of threads. The second minor contribution is the controller design module; an algebraic controller design algorithm, pole placement, is used to design a controller able to provide the desired time-varying control action. The adaptive self-tuning control framework and the cache-aware scheduling system together constitute our final framework, the closed-loop cache-aware adaptive scheduling framework.
The third minor contribution is the validation of the efficiency of this cache-aware adaptive closed-loop scheduling framework in overcoming co-runner cache dependency. Time-series statistical counters are developed for the M-Sim multi-core simulator, and the theoretical findings and mathematical formulations are implemented as MATLAB m-files. In this way, the overall framework is tested and the experimental outcomes are analyzed. The experiments indicate that our closed-loop cache-aware adaptive scheduling framework successfully drives the co-runner-dependent thread instruction count to the co-runner-independent instruction count with an error margin of up to 25% when the cache is highly utilized. In addition, the thread cache access pattern is estimated with 75% accuracy.
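The identification step described above can be sketched in a few lines. Below is a plain recursive least squares update applied to a synthetic second-order autoregressive series standing in for cache-miss counts; the thesis uses a QR-factorized RLS variant, and the model order, forgetting factor and data here are illustrative assumptions rather than the thesis's actual implementation:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.995):
    """One recursive least squares step with forgetting factor lam."""
    k = P @ phi / (lam + phi @ P @ phi)    # gain vector
    theta = theta + k * (y - phi @ theta)  # correct estimate by prediction error
    P = (P - np.outer(k, phi @ P)) / lam   # covariance update
    return theta, P

# Synthetic 2nd-order AR series standing in for a cache-miss time series.
rng = np.random.default_rng(0)
a_true = np.array([1.2, -0.5])             # assumed "true" dynamics
y = [0.0, 0.1]
for _ in range(2000):
    y.append(a_true @ np.array([y[-1], y[-2]]) + 0.01 * rng.standard_normal())

theta, P = np.zeros(2), 1000.0 * np.eye(2)
for t in range(2, len(y)):
    theta, P = rls_update(theta, P, np.array([y[t - 1], y[t - 2]]), y[t])

print(theta)  # close to a_true
```

In the thesis's framework the estimated dynamics would then feed the pole-placement controller design; this sketch stops at the identification stage.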

Relevance:

10.00%

Publisher:

Abstract:

This paper introduces our research on influencing the experience of people in urban public places through mobile-mediated interactions. Information and communication technology (ICT) devices are sometimes used to create personal space while in public. ICT devices could also be used to digitally augment urban space with non-privacy-sensitive data, enabling anonymous mobile-mediated interactions between collocated strangers. We present the motivation for research on digital augmentation and mobile-mediated interactions between unknown urban dwellers, define the research problem that drives this study, and explain why it is significant to the field of pervasive social networking. The paper illustrates three design interventions enabling social pervasive content sharing and employing pervasive presence, awareness and anonymous social user interaction in urban public places. The paper concludes with an outlook and summarises the research effort.

Relevance:

10.00%

Publisher:

Abstract:

Proteases regulate a spectrum of diverse physiological processes, and dysregulation of proteolytic activity drives a plethora of pathological conditions. Understanding protease function is essential to appreciating many aspects of normal physiology and the progression of disease. Consequently, the development of potent and specific inhibitors of proteolytic enzymes is vital, both to provide tools for dissecting protease function in biological systems and for treating diseases linked to aberrant proteolytic activity. The studies in this thesis describe the rational design of potent inhibitors of three proteases that are implicated in disease development. Additionally, key features of the interaction of proteases with their cognate inhibitors or substrates are analysed, and a series of rational inhibitor design principles are expounded and tested. Rational design of protease inhibitors relies on a comprehensive understanding of protease structure and biochemistry. Analysis of known protease cleavage sites in proteins and peptides is a commonly used source of such information. However, model peptide substrates and protein sequences have widely differing levels of backbone constraint and hence can adopt highly divergent structures when binding to a protease’s active site. This may result in identical sequences in peptides and proteins having different conformations and diverse spatial distributions of amino acid functionalities. Despite this, protein and peptide cleavage sites are often regarded as equivalent. One of the key findings of the following studies is a definitive demonstration of the lack of equivalence between these two classes of substrate, invalidating the common practice of using the sequences of model peptide substrates to predict cleavage of proteins in vivo. Another important feature of protease substrate recognition is subsite cooperativity.
This type of cooperativity is commonly referred to as protease or substrate binding subsite cooperativity and is distinct from allosteric cooperativity, where binding of a molecule distant from the protease active site affects the binding affinity of a substrate. Subsite cooperativity may be intramolecular, where neighbouring residues in a substrate interact and affect the scissile bond’s susceptibility to protease cleavage. Subsite cooperativity can also be intermolecular, where a particular residue’s contribution to binding affinity changes depending on the identity of neighbouring amino acids. Although numerous studies have identified subsite cooperativity effects, these findings are frequently ignored in investigations that probe subsite selectivity by screening against diverse combinatorial libraries of peptides (positional-scanning synthetic combinatorial libraries; PS-SCLs). This strategy for determining cleavage specificity relies on the averaged rates of hydrolysis for an uncharacterised ensemble of peptide sequences, as opposed to the defined rate of hydrolysis of a known specific substrate. Further, since PS-SCL screens probe the preferences of the various protease subsites independently, the method is inherently unable to detect subsite cooperativity. Nevertheless, mean hydrolysis rates from PS-SCL screens are often interpreted as being comparable to those produced by single peptide cleavages. Before this study, no large systematic evaluation had been made of the level of correlation between protease selectivity as predicted by screening against a library of combinatorial peptides and cleavage of individual peptides. This subject is specifically explored in the studies described here. In order to establish whether PS-SCL screens can accurately determine the substrate preferences of proteases, a systematic comparison was carried out between data from PS-SCLs and libraries containing individually synthesised peptides (sparse matrix library; SML).
These SML libraries were designed to include all possible sequence combinations of the residues suggested to be preferred by a protease in the PS-SCL method. SML screening against the three serine proteases kallikrein 4 (KLK4), kallikrein 14 (KLK14) and plasmin revealed highly preferred peptide substrates that could not have been deduced by PS-SCL screening alone. Comparing protease subsite preference profiles from screens of the two types of peptide libraries showed that the most preferred substrates were not detected by PS-SCL screening, as a consequence of intermolecular cooperativity being negated by the very nature of PS-SCL screening. Sequences that are highly favoured as a result of intermolecular cooperativity achieve optimal protease subsite occupancy, and thereby interact with very specific determinants of the protease. Identifying these substrate sequences is important since they may be used to produce potent and selective inhibitors of proteolytic enzymes. This study found that highly favoured substrate sequences relying on intermolecular cooperativity allowed the production of potent inhibitors of KLK4, KLK14 and plasmin. Peptide aldehydes based on preferred plasmin sequences produced high-affinity transition state analogue inhibitors for this protease. The most potent of these maintained specificity over plasma kallikrein (known to have a very similar substrate preference to plasmin). Furthermore, the efficiency of this inhibitor in blocking fibrinolysis in vitro was comparable to aprotinin, which previously saw clinical use to reduce perioperative bleeding. One substrate sequence particularly favoured by KLK4 was substituted into the 14-amino-acid circular sunflower trypsin inhibitor (SFTI). This resulted in a highly potent and selective inhibitor (SFTI-FCQR) which attenuated protease-activated receptor signalling by KLK4 in vitro.
Moreover, SFTI-FCQR and paclitaxel synergistically reduced the growth of ovarian cancer cells in vitro, making this inhibitor a lead compound for further therapeutic development. Similar incorporation of a preferred KLK14 amino acid sequence into the SFTI scaffold produced a potent inhibitor for this protease. However, the conformationally constrained SFTI backbone enforced a different intramolecular cooperativity, which masked a KLK14-specific determinant. As a consequence, the level of selectivity achievable was lower than that found for the KLK4 inhibitor. Standard mechanism inhibitors such as SFTI rely on a stable acyl-enzyme intermediate for high-affinity binding. This is achieved by a conformationally constrained canonical binding loop that allows reformation of the scissile peptide bond after cleavage. Amino acid substitutions within the inhibitor to target a particular protease may compromise the structural determinants that support the rigidity of the binding loop, and thereby prevent the engineered inhibitor from reaching its full potential. An in silico analysis was carried out to examine the potential for further improvements to the potency and selectivity of the SFTI-based KLK4 and KLK14 inhibitors. Molecular dynamics simulations suggested that the substitutions within SFTI required to target KLK4 and KLK14 had compromised the intramolecular hydrogen bond network of the inhibitor and caused a concomitant loss of binding loop stability. Furthermore, in silico amino acid substitution revealed a consistent correlation between a higher frequency of formation of internal hydrogen bonds in SFTI variants and lower inhibition constants. These predictions allowed the production of second-generation inhibitors with enhanced binding affinity toward both targets, and highlight the importance of considering intramolecular cooperativity effects when engineering proteins or circular peptides to target proteases.
The findings from this study show that although PS-SCLs are a useful tool for high-throughput screening of approximate protease preferences, later refinement by SML screening is needed to reveal the optimal subsite occupancy that arises from cooperativity in substrate recognition. This investigation has also demonstrated the importance of maintaining the structural determinants of backbone constraint and conformation when engineering standard mechanism inhibitors for new targets. Combined, these results show that backbone conformation and amino acid cooperativity have more prominent roles than previously appreciated in determining substrate/inhibitor specificity and binding affinity. The three key inhibitors designed during this investigation are now being developed as lead compounds for cancer chemotherapy, control of fibrinolysis and cosmeceutical applications. These compounds form the basis of a portfolio of intellectual property which will be further developed in the coming years.
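Why positionally averaged screens cannot see intermolecular cooperativity can be illustrated with a toy calculation. All rates below are invented numbers for a two-subsite, two-residue example, not experimental data: the cooperative combination is the best individual substrate, yet the positionally averaged preferences point to a different sequence.

```python
# Hypothetical hydrolysis rates for four dipeptide substrates (invented numbers).
# Intermolecular cooperativity makes A-B by far the best individual substrate.
rates = {("A", "A"): 0.0, ("A", "B"): 10.0, ("B", "A"): 6.0, ("B", "B"): 5.0}

def positional_average(pos, res):
    """Mean rate with residue res fixed at one subsite, averaged over the other
    subsite -- effectively what a PS-SCL screen reports per position."""
    matching = [r for (p1, p2), r in rates.items()
                if (p1 if pos == 1 else p2) == res]
    return sum(matching) / len(matching)

# Choose each subsite independently from the averages, as PS-SCL data invite.
predicted = tuple(max("AB", key=lambda res: positional_average(pos, res))
                  for pos in (1, 2))
best = max(rates, key=rates.get)  # the truly best individual substrate

print(predicted, best)  # ('B', 'B') ('A', 'B') -- averaging hides the cooperative pair
```

The averages favour B at both positions (5.5 and 7.5) even though the B-B substrate (rate 5.0) is half as good as the cooperative A-B substrate (rate 10.0), which only an individually synthesised (SML-style) screen would reveal.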

Relevance:

10.00%

Publisher:

Abstract:

The report card for the introductory programming unit at our university has historically been unremarkable in terms of attendance rates, student success rates and student retention in both the unit and the degree course. After a recent course restructure involving a fresh approach to introducing programming, we reported high retention in the unit, with consistently high attendance and a very low failure rate. Following those encouraging results, we collected student attendance data for several semesters and compared attendance rates to student results. We have found that interesting workshop material that relates directly to course-relevant assessment items (and therefore drives the learning), delivered in an engaging collaborative learning environment, has improved attendance to an extraordinary extent, with student failure rates plummeting to the lowest in the recorded history of our university.

Relevance:

10.00%

Publisher:

Abstract:

Recent years have seen a rapid increase in SMEs working collaboratively in inter-organizational projects. But what drives the emergence of such projects, and what types of industries breed them the most? To address these questions, this paper extends the long-running literature on the firm and industry antecedents of new venturing and alliance formation to the domain of project-based organization by SMEs. Based on survey data collected from 1,725 small and medium-sized organizations and longitudinal industry data, we find an overall pattern indicating that IOPV participation is primarily determined by a focal SME’s scope of innovative activities and by the munificence, dynamism and complexity of its environment. Unexpectedly, these variables have different effects on whether SMEs are likely to engage in IOPVs compared with how many IOPVs are in their portfolio at a time. Implications for theory development are discussed.

Relevance:

10.00%

Publisher:

Abstract:

This chapter proposes a conceptual model for optimal development of the capabilities needed for the contemporary knowledge economy. We commence by outlining the key capability requirements of the 21st-century knowledge economy, distinguishing these from those suited to its earlier stages. We then discuss the extent to which higher education currently caters to these requirements, and put forward a new model for effective knowledge economy capability learning. The core of this model is the development of an adaptive and adaptable career identity, created through a reflective process of career self-management that draws on data from the self and from the world of work. In turn, career identity drives the individual’s process of skill and knowledge acquisition, including deep disciplinary knowledge. The professional capability learning thus acquired includes disciplinary skill and knowledge sets, generic skills, and also skills for the knowledge economy, including disciplinary agility, social network capability and enterprise skills. In the final part of the chapter, we envision higher education systems that embrace the model, and suggest steps that could be taken toward making the development of knowledge economy capabilities an integral part of the university experience.

Relevance:

10.00%

Publisher:

Abstract:

This paper investigates the role of cultural factors as a possible partial explanation of the disparity in project management deployment observed between the countries studied. The topic of culture has received increasing attention in the management literature in general over recent decades, and in the project management literature in particular over the last few years. The globalization of business and worldwide collaboration among governmental and international organizations continues to increase interest in national culture. Based on Hofstede’s national culture framework, the study hypothesizes and tests the impact of a country’s culture and development on PM deployment. Seventy-four countries are selected for a correlation and regression analysis between Hofstede’s national culture dimensions and the PM deployment indicator used. The results show the relations between the various national culture dimensions, a development indicator (GDP per capita) and the project management deployment levels of the countries considered.
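The correlation-and-regression step described can be sketched as follows. The eight country scores are fabricated purely for illustration (the paper uses 74 countries and a specific deployment indicator), and the negative trend built into the numbers is an assumption of the sketch, not the paper's finding:

```python
import numpy as np

# Fabricated per-country scores: one Hofstede dimension vs a PM deployment indicator.
culture_dimension = np.array([35, 46, 53, 65, 70, 86, 92, 95], dtype=float)
pm_deployment = np.array([8.1, 7.4, 6.9, 6.1, 5.8, 4.9, 4.2, 4.0])

r = np.corrcoef(culture_dimension, pm_deployment)[0, 1]             # Pearson correlation
slope, intercept = np.polyfit(culture_dimension, pm_deployment, 1)  # OLS regression line

print(round(r, 3), round(slope, 4))
```

With 74 countries the same two calls apply unchanged; significance testing and multivariate regression (several dimensions plus GDP per capita) would normally follow.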

Relevance:

10.00%

Publisher:

Abstract:

This paper compares and investigates the discrepancies in motivational drives of project team members with respect to their project environment in collocated and distributed (virtual) project teams. A set of factors, here called ‘Sense of Ownership’, is used as a scale to measure these discrepancies using one-tailed t-tests. These factors are abstracted from theories of motivation, team performance and team effectiveness, and relate to ‘Nature of Work’, ‘Rewards’ and ‘Communication’. We observe that ‘virtualness’ does not seem to affect the motivational drives of project team members or the way project environments provide or support those drives in collocated and distributed projects. More specifically, in terms of the motivational drives of the project team (‘WANT’) and the ability of the project environment to provide or support those factors (‘GET’), collocated project teams showed significant discrepancies with respect to financial and non-financial rewards, learning opportunities, nature of work and project-specific communication, while distributed teams showed significant discrepancies with respect to project-centric communication, followed by financial rewards and nature of work. Further, distributed project environments seem to support team member motivation better than collocated project environments. The study concludes that both collocated and distributed project environments may not adequately support the motivational drives of their project team members, which may frustrate them. However, members of virtual teams may be less frustrated than their collocated counterparts, as virtual project environments are better aligned with the motivational drives of their team members than collocated project environments.
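The one-tailed paired comparison of ‘WANT’ against ‘GET’ can be sketched like this. The Likert-style scores and sample size are fabricated for illustration; the paper's instrument and data are not reproduced here:

```python
import numpy as np

# Fabricated paired scores from the same (hypothetical) team members:
# WANT = strength of a motivational drive, GET = how well the environment supports it.
want = np.array([5, 4, 5, 4, 5, 3, 4, 5, 4, 5], dtype=float)
get = np.array([3, 3, 4, 2, 4, 3, 3, 4, 3, 4], dtype=float)

d = want - get
t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))  # paired t statistic, df = n - 1

# One-tailed test of WANT > GET: compare t against the critical value
# t(0.05, df=9), roughly 1.833; a larger t indicates a significant discrepancy.
print(round(t, 2))
```

A dedicated routine such as SciPy's paired t-test would give the same statistic plus a p-value; the manual form is shown to make the df = n - 1 pairing explicit.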

Relevance:

10.00%

Publisher:

Abstract:

The opening phrase of the title is from Charles Darwin’s notebooks (Schweber 1977). It is a double reminder: firstly, that mainstream evolutionary theory is not just about describing nature but is particularly looking for mechanisms or ‘causes’; and secondly, that there will usually be several causes affecting any particular outcome. The second part of the title reflects our concern at the almost universal rejection of the idea that biological mechanisms are sufficient for macroevolutionary changes, a rejection of a cornerstone of Darwinian evolutionary theory. Our primary aim here is to consider ways of making it easier to develop and test hypotheses about evolution. Formalizing hypotheses can help generate tests. In an absolute sense, some of the discussion by scientists about evolution is little better than the lack of reasoning used by those advocating intelligent design. Our discussion is set in a Popperian framework, where science is defined as that area of study in which it is possible, in principle, to find evidence against hypotheses – they are in principle falsifiable. However, with time, the boundaries of science keep expanding. In the past, some aspects of evolution were outside the current boundaries of falsifiable science, but new techniques and ideas keep expanding those boundaries, and it is appropriate to re-examine some topics. Over the last few decades there appears to have been an increasingly strong assumption to look first (and only) for a physical cause. This decision is virtually never formally discussed; it is simply assumed that some physical factor ‘drives’ evolution. It is necessary to examine our assumptions much more carefully: what is meant by physical factors ‘driving’ evolution, or by an ‘explosive radiation’? Our discussion focuses on two of the six mass extinctions: the fifth, the events of the Late Cretaceous, and the sixth, which started at least 50,000 years ago and is ongoing.
Cretaceous/Tertiary boundary: the rise of birds and mammals. We have had a long-term interest (Cooper and Penny 1997) in designing tests to help evaluate whether the processes of microevolution are sufficient to explain macroevolution. The real challenge is to formulate hypotheses in a testable way. For example, the number of lineages of birds and mammals that survive from the Cretaceous to the present is one such test. Our first estimate was 22 for birds, and current work is tending to increase this value. This still does not consider lineages that survived into the Tertiary and then went extinct later. Our initial suggestion was probably too narrow, in that it lumped four models from Penny and Phillips (2004) into one. This reduction is too simplistic: we need to know about survival and about ecological and morphological divergences during the Late Cretaceous, and whether crown groups of avian or mammalian orders may have existed back into the Cretaceous. More recently (Penny and Phillips 2004) we have formalized hypotheses about dinosaurs and pterosaurs, with the prediction that interactions between mammals (and ground-feeding birds) and dinosaurs would be most likely to affect the smallest dinosaurs, and similarly that interactions between birds and pterosaurs would particularly affect the smaller pterosaurs. There is now evidence for both classes of interactions, with the smallest dinosaurs and pterosaurs declining first, as predicted. Thus, testable models are now possible. Mass extinction number six: human impacts. On a broad scale, there is a good correlation between the time of human arrival and increased extinctions (Hurles et al. 2003; Martin 2005; Figure 1). However, it is necessary to distinguish different time scales (Penny 2005), and on a finer scale there are still large numbers of possibilities. In Hurles et al.
(2003) we mentioned habitat modification (including the use of fire) and introduced plants and animals (including kiore), in addition to direct predation (the ‘overkill’ hypothesis). We also need to consider the prey switching that occurs in early human societies, as evidenced by the results of Wragg (1995) on middens of different ages on Henderson Island in the Pitcairn group. In addition, the presence of human-wary or human-adapted animals will affect the distribution in the subfossil record. A better understanding of human impacts world-wide, in conjunction with pre-scientific knowledge, will make it easier to discuss the issues by removing ‘blame’. While continued spontaneous generation was accepted universally, there was the expectation that animals would continue to reappear. New Zealand is one of the very best locations in the world to study many of these issues: apart from the marine fossil record, some human impact events are extremely recent and the remains are less disrupted by time.

Relevance:

10.00%

Publisher:

Abstract:

Building Web 2.0 sites does not necessarily ensure the success of a site. We aim to better understand what makes a site successful by drawing insight from biologically inspired design patterns. Web 2.0 sites provide a mechanism for human interaction, enabling powerful intercommunication between massive volumes of users. Early Web 2.0 site providers that were previously dominant are being succeeded by newer sites providing innovative social interaction mechanisms. Understanding which site traits contribute to this success drives research into Web site mechanics, using models to describe the associated social networking behaviour. Some of these models attempt to show how the volume of users provides self-organisation and self-contextualisation of content. One model describing coordinated environments is stigmergy, a term originally describing coordinated insect behaviour. This paper explores how exploiting stigmergy can provide a valuable mechanism for identifying and analysing online user behaviour, specifically when user freedom of choice is restricted by the provided Web site functionality. This will aid us in building better collaborative Web sites and improving collaborative processes.
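A stigmergic mechanism of the kind described, in which activity deposits a trace that biases subsequent activity while older traces decay, can be sketched with a toy simulation. The items, deposit and evaporation constants below are arbitrary illustrative choices, not the paper's model:

```python
import random

random.seed(1)
pheromone = {"item_a": 1.0, "item_b": 1.0, "item_c": 1.0}  # equal starting traces
EVAPORATION, DEPOSIT = 0.99, 0.5

for _ in range(2000):
    items = list(pheromone)
    # A user picks content with probability proportional to its accumulated trace.
    choice = random.choices(items, weights=[pheromone[i] for i in items])[0]
    for i in items:
        pheromone[i] *= EVAPORATION  # older traces decay
    pheromone[choice] += DEPOSIT     # the chosen item is reinforced

# The total trace settles near DEPOSIT / (1 - EVAPORATION) = 50, however choices fall.
print(sorted(pheromone.items(), key=lambda kv: -kv[1]))
```

The positive feedback (popular content attracts more attention) and the evaporation term (stale content fades) are the two ingredients the stigmergy analogy supplies for modelling self-organising content popularity.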

Relevance:

10.00%

Publisher:

Abstract:

This report maps the current state of entrepreneurship in Australia using data from the Global Entrepreneurship Monitor (GEM) for the year 2011. Entrepreneurship is regarded as a crucial driver for economic well-being. Entrepreneurial activity in new and established firms drives innovation and creates jobs. Entrepreneurs also fuel competition thereby contributing indirectly to market and productivity growth along with improving competitiveness of the national economy. Given the economic landscape that exists as a result of the global financial crisis (GFC), it is probably more important than ever for us to understand the effects and drivers of entrepreneurial activity and attitudes in Australia. The central finding of this report is that entrepreneurship is certainly alive and well in Australia. With 10.5 per cent of the adult population involved in setting up a new business or owning a newly founded business as measured by the total entrepreneurial activity rate (TEA) in 2011, Australia ranks second only to the United States among the innovation-driven (developed) economies. Compared with 2010 the TEA rate has increased by 2.7 percentage points. Furthermore, in regard to employee entrepreneurial activity (EEA) rate in established firms, Australia ranks above average. According to GEM data, 5 per cent of the adult population is engaged in developing or launching new products, a new business unit or subsidiary for their employer. Further analysis of the GEM data also clearly shows that Australia compares well with other major economies in terms of the ‘quality’ of entrepreneurial activities being pursued. Indeed, it is not only the quantity of entrepreneurs but also the level of their aspirations and business goals that are important drivers for economic growth. 
On average, for each business started in Australia because the founder lacked any alternative source of income, five other businesses are started whose founders specifically want to take advantage of a business opportunity that they believe will increase their personal income or independence. With respect to innovativeness, 31 per cent of Australian new businesses offer products or services that they consider to be new to customers, or for which very few, or in some cases no, other businesses offer the same product or service. Both these indicators are higher than the average for innovation-driven economies. Somewhat below average is the international orientation of Australian entrepreneurs: only 12 per cent aim to have a substantial share of customers from international markets. So what drives this high quantity and quality of entrepreneurship in Australia? The analysis of the data suggests it is a combination of both business opportunities and entrepreneurial skills. Around 50 per cent of the Australian population identify opportunities for a start-up venture and believe that they have the necessary skills to start a business. Furthermore, a large majority of the Australian population report that high media attention for entrepreneurship provides successful role models for prospective entrepreneurs. As a result, 12 per cent of respondents have expressed the intention to start a business within the next three years. These numbers are all well above average when compared with the other major economies. With regard to gender, the GEM survey shows a high proportion of female entrepreneurs: approximately 8.4 per cent of adult females are involved in setting up a business or have recently done so. Although this female TEA rate is slightly down from 2010, Australia ranks second among the innovation-driven economies.
This paints a healthy picture of access to entrepreneurial opportunities for Australian women.
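The TEA rate discussed above is simply the share of the adult population engaged in setting up or newly owning a business. A minimal illustration of the arithmetic; the survey counts below are invented for the example and are not actual GEM figures.

```python
def tea_rate(nascent, new_owners, adult_population):
    """Total early-stage entrepreneurial activity (TEA) rate, in per cent:
    the share of adults setting up a business (nascent entrepreneurs)
    or owning a newly founded one (new business owners)."""
    return 100.0 * (nascent + new_owners) / adult_population

# hypothetical survey counts, chosen only to illustrate the calculation
rate = tea_rate(nascent=120, new_owners=90, adult_population=2000)
# rate == 10.5 (per cent)
```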

Relevance:

10.00%

Publisher:

Abstract:

Architecture Post Mortem surveys architecture’s encounter with death, decline, and ruination under late capitalism. As the world moves closer to an economic abyss that many perceive to be the death of capital, contraction and crisis are no longer mere phases of normal market fluctuations, but rather the irruption of the unconscious of ideology itself. Post mortem is that historical moment wherein architecture’s symbolic contract with capital is put on stage, naked to all. Architecture is not irrelevant to fiscal and political contagion, as is commonly believed; it is both the victim and the penetrating analytical agent of the current crisis. As the very apparatus for modernity’s guilt and unfulfilled drives (modernity’s debt), architecture is that ideological element that functions as a master signifier of its own destruction, ordering all other signifiers and modes of signification beneath it. It is under these conditions that architecture theory has retreated to an “Alamo” of history, a final desert outpost where history has been asked to transcend itself. For architecture’s hoped-for utopia always involves an apocalypse. This timely collection of essays reformulates architecture’s relation to modernity via the operational death-drive: architecture is but a passage between life and death. This collection includes essays by Kazi K. Ashraf, David Bertolini, Simone Brott, Peggy Deamer, Didem Ekici, Paul Emmons, Donald Kunze, Todd McGowan, Gevork Hartoonian, Nadir Lahiji, Erika Naginski, and Dennis Maher. Contents: Introduction: ‘the way things are’, Donald Kunze; Driven into the public: the psychic constitution of space, Todd McGowan; Dead or alive in Joburg, Simone Brott; Building in-between the two deaths: a post mortem manifesto, Nadir Lahiji; Kant, Sade, ethics and architecture, David Bertolini; Post mortem: building deconstruction, Kazi K. Ashraf; The slow-fast architecture of love in the ruins, Donald Kunze; Progress: re-building the ruins of architecture, Gevork Hartoonian; Adrian Stokes: surface suicide, Peggy Deamer; A window to the soul: depth in the early modern section drawing, Paul Emmons; Preliminary thoughts on Piranesi and Vico, Erika Naginski; Architectural asceticism and austerity, Didem Ekici; 900 miles to Paradise, and other afterlives of architecture, Dennis Maher; Index.

Relevance:

10.00%

Publisher:

Abstract:

The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure at future times is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of an asset, while operating environment indicators accelerate or decelerate its lifetime. When these data are available, an alternative to traditional reliability analysis is to model condition indicators, operating environment indicators, and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed, all based on the underlying theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have to some extent been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully incorporate three types of asset health information (including failure event data (i.e.
observed and/or suspended), condition data, and operating environment data) into a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics: they are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. A related and more pressing question is how both of these indicators should be effectively modelled and integrated into the covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three available types of asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; therefore, they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few.
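A baseline hazard that is a function of both time and condition indicators, as described above, can be sketched as follows. This is an illustrative rendering only: the linear functional form and all parameter values are assumptions, not the thesis's actual specification.

```python
def ehm_baseline(t, condition, theta=0.002, alpha=0.5):
    """Illustrative EHM-style baseline hazard h0(t, c): unlike a classical
    time-only baseline h0(t), it is updated by a condition indicator c
    (e.g. a normalised vibration level or oil-debris count) at time t."""
    # assumed form: hazard grows with age and with the degradation level
    return theta * t * (1.0 + alpha * condition)

# at the same age, a degraded asset carries a higher baseline hazard
healthy = ehm_baseline(t=100.0, condition=0.0)   # 0.002 * 100 * 1.0 = 0.2
worn = ehm_baseline(t=100.0, condition=2.0)      # 0.002 * 100 * 2.0 = 0.4
```

The point of the sketch is only the structure: the condition indicator reforms the baseline, rather than entering a separate covariate term.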
Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard from the baseline hazard. These indicators arise from the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators may be nil in EHM, condition indicators are always present, because they are observed and measured for as long as an asset remains operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, failure event data are sparse, and the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories, a non-parametric, distribution-free EHM has also been developed. The development of EHM into two forms is another merit of the model.
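The semi-parametric form described above can be sketched as a Weibull baseline, updated by a condition indicator and then accelerated or decelerated by the operating environment covariates through an exponential link. This is a hedged illustration under assumed functional forms and parameters, not the thesis's actual parameterisation.

```python
import math

def semi_parametric_ehm(t, condition, environment,
                        shape=1.5, scale=1000.0, alpha=0.5, gamma=(0.3,)):
    """Illustrative semi-parametric EHM-style hazard:
    a Weibull baseline h0(t) = (k/lam) * (t/lam)**(k-1), updated by the
    condition indicator (assumed linear rule), then scaled by the
    operating environment covariates via an exponential link."""
    baseline = (shape / scale) * (t / scale) ** (shape - 1)
    condition_update = 1.0 + alpha * condition          # assumed update rule
    env_link = math.exp(sum(g * e for g, e in zip(gamma, environment)))
    return baseline * condition_update * env_link

# a degraded asset under load has a higher hazard than a new, unloaded one
h_new = semi_parametric_ehm(500.0, condition=0.0, environment=[0.0])
h_worn = semi_parametric_ehm(500.0, condition=2.0, environment=[1.0])
```

A non-parametric variant would replace the Weibull baseline with an estimate built directly from the observed failure times, avoiding the distributional assumption, as the abstract notes.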
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimated results with those of the existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified: extending the new parameter estimation method to time-dependent covariate effects and missing data, applying EHM to both repairable and non-repairable systems using field data, and building a decision support model linked to the estimated reliability results.
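The reliability predictions being compared above follow from the hazard through the standard relation R(t) = exp(-∫₀ᵗ h(u) du). A minimal numerical sketch, using trapezoidal integration and an assumed constant hazard purely for illustration:

```python
import math

def reliability(hazard, t, steps=1000):
    """Survival probability R(t) = exp(-integral of the hazard from 0 to t),
    approximated with the trapezoidal rule."""
    dt = t / steps
    integral = 0.0
    for i in range(steps):
        a, b = i * dt, (i + 1) * dt
        integral += 0.5 * (hazard(a) + hazard(b)) * dt
    return math.exp(-integral)

# constant hazard rate lambda = 0.001: R(t) = exp(-lambda * t) exactly,
# so R(1000) should come out close to exp(-1)
R = reliability(lambda u: 0.001, 1000.0)
```

Any of the hazard models discussed in the abstract could be passed in as `hazard`, which is what makes a like-for-like comparison of their reliability estimates possible.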