242 results for Transitional phenomena
Abstract:
The Centre for Subtropical Design has prepared this submission to assist the Gold Coast City Council in finalising a plan and detailed design guidelines for the Urban Plaza Zone of the Surfers Paradise Foreshore Redevelopment Masterplan, which will create a public open space 'alive' with the quality appropriate to a place that is both a local centre and an international destination. This review has been informed by the two over-arching values identified as characteristic of a subtropical place and people's connection to it: a sense of openness and permeability, and engagement with the natural environment. The existing qualities of the foreshore area proposed as the Urban Plaza Zone reflect these subtropical place values and are integral to the Surfers Paradise identity: seamless visual and spatial access to the beach and sea; a permeable interface between beach and built zones, provided by beach planting and the shade cast on the sand by Pandanus; a shade zone mediating between the beach and the linear promenade, road and commercial zones, enabling a variety of social and visual experiences on soft and hard finishes; and a lively, constantly moving shared road and pedestrian way catering for events and day-to-day activities, with visual access to the beach and shaded areas. The Centre for Subtropical Design commends the Gold Coast City Council on preparing a plan for a public open space that is a contemporary departure from the ad hoc basis of development that has occurred, in that it will make this area more accessible. However, the proposed plan seems to be working too hard in terms of 'program'. While providing an identifiable interruption in the linear extent of the Foreshore, the lack of continuity of design in both hardscaping (such as perpendicular paving elements) and softscaping (such as tree selections) may detract from the definition of the entire Foreshore as a place that mediates, along its length, between sea and land. Introducing a hard edge into a beach character of soft, planted transitional elements requires balancing the proposed visual and physical barrier against the need for perceived and actual ease of access. The Surfers Paradise identity needs strengthening through attention to planting for shade, to materials (particularly the selection of paving colours) and to stronger delineation of the linear nature of the Foreshore. The Urban Plaza Zone is an appropriate interruption to the continuous planting; however, the link from the commercial zone overtakes the public and beach zones. A more seamless transition from shop to sea, better integration of the roadway and pedestrian zone, and an improved physical transition from concrete to sand are recommended. Built form solutions must be robust and designed with the subtropical design principles and the Surfers Paradise identity as underpinning parameters for a lasting and memorable public open space.
Abstract:
Longitudinal panel studies of large, random samples of business start-ups captured at the pre-operational stage allow researchers to address core issues for entrepreneurship research, namely, the processes of creation of new business ventures as well as their antecedents and outcomes. Here, we perform a methods-orientated review of all 83 journal articles that have used this type of data set, our purpose being to assist users of current data sets as well as designers of new projects in making the best use of this innovative research approach. Our review reveals a number of methods issues that are largely particular to this type of research. We conclude that amidst exemplary contributions, much of the reviewed research has not adequately managed these methods challenges, nor has it made use of the full potential of this new research approach. Specifically, we identify and suggest remedies for context-specific and interrelated methods challenges relating to sample definition, choice of level of analysis, operationalization and conceptualization, use of longitudinal data and dealing with various types of problematic heterogeneity. In addition, we note that future research can make further strides towards full utilization of the advantages of the research approach through better matching (from either direction) between theories and the phenomena captured in the data, and by addressing some under-explored research questions for which the approach may be particularly fruitful.
Abstract:
The ability to accurately predict the remaining useful life of machine components is critical for continuous machine operation and can also improve productivity and enhance system safety. In condition-based maintenance (CBM), maintenance is performed based on information collected through condition monitoring and assessment of machine health. Effective diagnostics and prognostics are important aspects of CBM, enabling maintenance engineers to schedule repairs and to acquire replacement components before the components actually fail. Although a variety of prognostic methodologies have been reported recently, their application in industry is still relatively new and mostly focused on the prediction of specific component degradations. Furthermore, they require a significant and sufficient number of fault indicators to accurately prognose component faults. Hence, effective use of health indicators in prognostics for the interpretation of the machine degradation process is still required. Major challenges for accurate long-term prediction of remaining useful life (RUL) remain to be addressed. Therefore, continuous development and improvement of machine health management systems and accurate long-term prediction of machine remnant life are required in real industry applications. This thesis presents an integrated diagnostics and prognostics framework based on health state probability estimation for accurate and long-term prediction of machine remnant life. In the proposed model, prior empirical (historical) knowledge is embedded in the integrated diagnostics and prognostics system for the classification of impending faults in the machine system and accurate probability estimation of discrete degradation stages (health states). The methodology assumes that machine degradation consists of a series of degraded states (health states) which effectively represent the dynamic and stochastic process of machine failure. The estimation of discrete health state probabilities for the prediction of machine remnant life is performed using classification algorithms. To select an appropriate classifier for health state probability estimation in the proposed model, comparative intelligent diagnostic tests were conducted using five different classifiers applied to the progressive fault data of three different faults in a high-pressure liquefied natural gas (HP-LNG) pump. As a result of this comparison study, support vector machines (SVMs) were employed in health state probability estimation for the prediction of machine failure in this research. The proposed prognostic methodology has been successfully tested and validated using a number of case studies, from simulation tests to real industry applications. The results from two actual failure case studies using simulations and experiments indicate that accurate estimation of health states is achievable and that the proposed method provides accurate long-term prediction of machine remnant life. In addition, the results of experimental tests show that the proposed model is capable of providing early warning of abnormal machine operating conditions by identifying the transitional states of machine fault conditions. Finally, the proposed prognostic model is validated through two industrial case studies. The optimal number of health states, which can minimise the model training error without a significant decrease in prediction accuracy, was also examined across several health states of bearing failure. The results were very encouraging and show that the proposed prognostic model based on health state probability estimation has the potential to be used as a generic and scalable asset health estimation tool in industrial machinery.
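As an illustration of the health state probability idea, the following minimal sketch trains a multi-class SVM with probability outputs over discretised health states and forms a probability-weighted remnant-life estimate. The feature data, state labels, per-state lives and the weighting step are illustrative assumptions, not the thesis's exact procedure.

```python
# Minimal sketch of health-state probability estimation with an SVM:
# degradation is discretised into states, a classifier outputs state
# probabilities, and remnant life is a probability-weighted estimate.
# Feature data, state labels and per-state lives are placeholders.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder training data: vibration features labelled with 4 health
# states, 0 = healthy ... 3 = near failure.
X_train = np.vstack([rng.normal(loc=s, scale=0.5, size=(50, 3)) for s in range(4)])
y_train = np.repeat(np.arange(4), 50)

clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)  # Platt scaling

# Assumed mean remaining useful life (hours) for each health state
rul_per_state = np.array([900.0, 500.0, 150.0, 20.0])

x_now = rng.normal(loc=2.2, scale=0.5, size=(1, 3))  # current feature vector
p = clf.predict_proba(x_now)[0]                      # state probabilities
print("state probabilities:", p.round(3))
print("RUL estimate (h):", float(p @ rul_per_state)) # probability-weighted
```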
Abstract:
A Wireless Sensor Network (WSN) is a set of sensors that are integrated with a physical environment. These sensors are small in size and capable of sensing physical phenomena and processing the sensed data. Owing to their short radio range, they communicate in a multihop manner to form an ad hoc network capable of reporting network activities to a data collection sink. Recent advances in WSNs have led to several new promising applications, including habitat monitoring, military target tracking, natural disaster relief, and health monitoring. A current sensor node, such as the MICA2, uses a 16-bit, 8 MHz Texas Instruments MSP430 micro-controller with only 10 KB of RAM, 128 KB of program space and 512 KB of external flash memory to store measurement data, and is powered by two AA batteries. Due to these unique specifications and a lack of tamper-resistant hardware, devising security protocols for WSNs is complex. Previous studies show that data transmission consumes much more energy than computation. Data aggregation can greatly help to reduce this consumption by eliminating redundant data. However, aggregators are under the threat of various types of attacks, among which node compromise is usually considered one of the most challenging for the security of WSNs. In a node compromise attack, an adversary physically tampers with a node in order to extract its cryptographic secrets. This attack can be very harmful, depending on the security architecture of the network: for example, when an aggregator node is compromised, it is easy for the adversary to change the aggregation result and inject false data into the WSN. The contributions of this thesis to the area of secure data aggregation are manifold. First, we define security for data aggregation in WSNs; in contrast with existing secure data aggregation definitions, the proposed definition covers the unique characteristics of WSNs. Second, we analyse the relationship between the security services and adversarial models considered in existing secure data aggregation work in order to provide a general framework of required security services. Third, we analyse existing cryptographic-based and reputation-based secure data aggregation schemes; this analysis covers the security services provided by these schemes and their robustness against attacks. Fourth, we propose a robust reputation-based secure data aggregation scheme for WSNs. This scheme minimises the use of heavy cryptographic mechanisms, and its security advantages are realised by integrating aggregation functionalities with (i) a reputation system, (ii) estimation theory, and (iii) a change detection mechanism. We show that this addition helps defend against most of the security attacks discussed in this thesis, including the On-Off attack. Finally, we propose a secure key management scheme to distribute essential pairwise and group keys among the sensor nodes. The proposed scheme combines Lamport's reverse hash chain with a conventional hash chain to provide both past and future key secrecy. It avoids delivering the whole value of a new group key during a group key update; instead, only half of the value is transmitted from the network manager to the sensor nodes. In this way, the compromise of a pairwise key alone does not lead to the compromise of the group key. The new pairwise key in our scheme is determined by Diffie-Hellman based key agreement.
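As an illustration of the reverse hash chain component mentioned above, the sketch below shows the core authentication idea (keys released in reverse order, each verified with a single hash). The chain length and seed are placeholders, and the full scheme's pairwise/group key split and Diffie-Hellman agreement are not reproduced here.

```python
# Minimal sketch of a Lamport-style reverse hash chain: the manager
# precomputes a chain from a secret seed and releases keys in reverse
# order, so a node can authenticate each new key by hashing it once
# and comparing with the previously accepted one.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Network manager: precompute K_0 <- H(seed), K_i = H(K_{i-1}).
N = 100
chain = [H(b"secret seed")]            # seed is a placeholder value
for _ in range(N - 1):
    chain.append(H(chain[-1]))

# Nodes are pre-loaded only with the chain tip (the last element).
node_anchor = chain[-1]

# Keys are then released in reverse order: K_{N-2}, K_{N-3}, ...
for released in reversed(chain[:-1]):
    assert H(released) == node_anchor  # one hash authenticates the key
    node_anchor = released             # advance the anchor
print("all released keys verified against the chain tip")
```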
Abstract:
Adolescents are both aware of, and have the impetus to exploit, aspects of Science, Technology, Engineering and Mathematics (STEM) within their personal lives. Whether they are surfing, cycling, skateboarding or shopping, STEM concepts impact their lives. However, science, mathematics, engineering and technology are still treated as separate, fragmented entities in the classroom, where most classroom talk is seemingly incomprehensible to adolescent sensibilities. The aim of this study was to examine the experiences of young adolescents, using a self-study approach, in order to transform school learning, at least of science, into meaningful experiences that connected with their lives. Over a 12-month period, the researcher, an experienced secondary science teacher, designed, implemented and documented a range of pedagogical practices with his Year 7 secondary science class. Data for this case study included video recordings, journals, interviews and surveys of students. Within an environment empathetic to adolescent needs and understandings, students were able to actively explore phenomena collaboratively through developmentally appropriate experiences. Providing a more contextually relevant environment fostered meta-cognitive practices and encouraged new learning through open dialogue, multi-modal representations and assessments that contributed to building upon, re-affirming or challenging both the students' prior learning and the teacher's pedagogical content knowledge. A significant outcome of this study was the transformative experience of an insider, the teacher as researcher, whose reflections provided an authentic model for reforming pedagogy in STEM classes.
Abstract:
Explanations of the role of analogies in learning science at a cognitive level are made in terms of creating bridges between new information and students' prior knowledge. In this empirical study of learning with analogies in an 11th-grade chemistry class, we explore an alternative explanation at the "social" level, where analogy shapes classroom discourse. Students in the study developed analogies within small groups and with their teacher. These classroom interactions were monitored to identify changes in discourse that took place through these activities. Beginning from socio-cultural perspectives and hybridity, we investigated classroom discourse during analogical activities. From our analyses, we theorized a merged discourse that explains how the analog discourse becomes intertwined with the target discourse, generating a transitional state where meanings, signs, symbols, and practices are in flux. Three categories were developed that capture how students intertwined the analog and target discourses: merged words, merged utterances/sentences, and merged practices.
Abstract:
In most materials, short stress waves are generated during plastic deformation, phase transformation, crack formation and crack growth. These phenomena are exploited in acoustic emission (AE) techniques across a wide spectrum of areas, ranging from nondestructive testing for the detection of material defects to the monitoring of microseismic activity. The AE technique is also used for defect source identification and for failure detection. AE waves consist of P waves (primary/longitudinal waves), S waves (shear/transverse waves) and Rayleigh (surface) waves, as well as reflected and diffracted waves. The propagation of AE waves in various modes has made the determination of source location difficult. In order to use the acoustic emission technique for accurate identification of a source, an understanding of the propagation of AE signals at various locations in a plate structure is essential. Such an understanding can also assist in sensor placement for optimum detection of AE signals and characterisation of the source. In practice, AE signals radiating from a source propagate as stress waves, and unless the type of stress wave is known it is very difficult to locate the source using the classical propagation velocity equations. This paper describes the simulation of AE waves to identify the source location and its characteristics in a steel plate, as well as the wave modes. Finite element analysis (FEA) is used for the numerical simulation of wave propagation in a thin plate. By knowing the type of wave generated, it is possible to apply the appropriate wave equations to determine the location of the source. For a single plate structure, the results show that the simulation algorithm is effective in simulating different stress waves.
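As an illustration of the final localisation step (applying propagation velocity equations once the wave type, and hence its speed, is known), the following is a minimal sketch of arrival-time source location on a plate. The sensor layout, wave speed and use of a least-squares solver are assumptions for the example.

```python
# Minimal sketch: 2D AE source localisation from arrival times,
# assuming a single known propagation velocity for the identified
# wave mode. Sensor positions, speed and times are illustrative.
import numpy as np
from scipy.optimize import least_squares

V = 5100.0  # assumed wave speed in the plate, m/s
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # m

def residuals(p, t_arr):
    """p = (x, y, t0): source position and emission time."""
    x, y, t0 = p
    dists = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
    return t0 + dists / V - t_arr

# Synthetic arrival times for a source at (0.3, 0.7), emitted at t0 = 0
true_src = np.array([0.3, 0.7])
t_arr = np.hypot(*(sensors - true_src).T) / V

sol = least_squares(residuals, x0=[0.5, 0.5, 0.0], args=(t_arr,))
print("estimated source:", sol.x[:2])  # ~ (0.3, 0.7)
```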
Abstract:
Background: In order to provide insights into the complex biochemical processes inside a cell, modelling approaches must find a balance between achieving an adequate representation of the physical phenomena and keeping the associated computational cost within reasonable limits. This issue is particularly pressing when spatial inhomogeneities have a significant effect on the system's behaviour. In such cases, a spatially resolved stochastic method can better portray the biological reality, but the corresponding computer simulations can in turn be prohibitively expensive. Results: We present a method that incorporates spatial information by means of tailored, probability-distributed time-delays. These distributions can be obtained directly from single in silico experiments or from a suitable set of in vitro experiments, and are subsequently fed into a delay stochastic simulation algorithm (DSSA), achieving a good compromise between computational cost and a much more accurate representation of spatial processes such as molecular diffusion and translocation between cell compartments. Additionally, we present a novel alternative approach based on delay differential equations (DDEs) that can be used in scenarios of high molecular concentrations and low noise propagation. Conclusions: Our proposed methodologies accurately capture and incorporate certain spatial processes into temporal stochastic and deterministic simulations, increasing their accuracy at low computational cost. This is of particular importance given that the time spans of cellular processes are generally larger (possibly by several orders of magnitude) than those achievable by current spatially resolved stochastic simulators. Hence, our methodology allows users to explore cellular scenarios under the effects of diffusion and stochasticity over time spans that were, until now, simply unfeasible. Our methodologies are supported by theoretical considerations on the different modelling regimes, i.e. spatial vs. delay-temporal, as indicated by the corresponding Master Equations and presented elsewhere.
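To illustrate the kind of simulation such delay distributions feed into, the sketch below implements a minimal delay stochastic simulation for a single reaction A -> B whose completion is deferred by a sampled delay (standing in for, e.g., a translocation time). The rate constant, delay distribution and copy numbers are assumptions for illustration; the paper's tailored, experimentally derived delay distributions would replace the Gaussian used here.

```python
# Minimal sketch of a delay stochastic simulation algorithm (DSSA):
# a first-order reaction A -> B is initiated Gillespie-style, but the
# product appears only after a sampled delay tau, tracked in a queue.
import heapq
import math
import random

k = 0.1           # initiation rate per molecule (assumed)
A, B = 100, 0     # initial copy numbers (assumed)
t, t_end = 0.0, 100.0
pending = []      # min-heap of completion times for in-flight molecules

while t < t_end:
    a0 = k * A                        # total propensity
    dt = math.inf if a0 == 0 else random.expovariate(a0)
    if pending and pending[0] <= t + dt:
        # a delayed completion fires before the next initiation
        t = heapq.heappop(pending)
        B += 1
    elif a0 == 0:
        break                          # nothing left to initiate or finish
    else:
        # initiate a reaction now; it completes after a delay tau
        t += dt
        A -= 1
        tau = random.gauss(5.0, 1.0)   # delay distribution (assumed)
        heapq.heappush(pending, t + max(tau, 0.0))

print(f"t = {t:.1f}: A = {A}, B = {B}, in transit = {len(pending)}")
```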
Abstract:
Maximum-likelihood estimates of the parameters of stochastic differential equations are consistent and asymptotically efficient, but unfortunately difficult to obtain if a closed-form expression for the transitional probability density function of the process is not available. As a result, a large number of competing estimation procedures have been proposed. This article provides a critical evaluation of the various estimation techniques. Special attention is given to the ease of implementation and comparative performance of the procedures when estimating the parameters of the Cox–Ingersoll–Ross and Ornstein–Uhlenbeck equations.
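To make the closed-form case concrete: for the Ornstein–Uhlenbeck equation the transition density is Gaussian, so exact maximum-likelihood estimation is straightforward. The following is a minimal sketch (not taken from the article); the parameter values, sample size and optimiser choice are illustrative assumptions.

```python
# Exact MLE for the Ornstein-Uhlenbeck process
#   dX = kappa*(theta - X) dt + sigma dW,
# whose transition X_{t+dt} | X_t is Gaussian with known mean/variance.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
kappa, theta, sigma, dt, n = 2.0, 1.0, 0.5, 0.01, 5000

# Simulate a path using the exact transition (not an Euler scheme),
# so the likelihood below is the true one for the simulated data.
x = np.empty(n)
x[0] = theta
for i in range(1, n):
    m = theta + (x[i - 1] - theta) * np.exp(-kappa * dt)
    v = sigma**2 * (1 - np.exp(-2 * kappa * dt)) / (2 * kappa)
    x[i] = m + np.sqrt(v) * rng.standard_normal()

def neg_loglik(params):
    k, th, s = params
    if k <= 0 or s <= 0:
        return np.inf
    m = th + (x[:-1] - th) * np.exp(-k * dt)
    v = s**2 * (1 - np.exp(-2 * k * dt)) / (2 * k)
    return 0.5 * np.sum(np.log(2 * np.pi * v) + (x[1:] - m) ** 2 / v)

fit = minimize(neg_loglik, x0=[1.0, 0.5, 0.3], method="Nelder-Mead")
print("kappa, theta, sigma =", fit.x)  # should be near (2.0, 1.0, 0.5)
```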
Abstract:
The kallikreins and kallikrein-related peptidases are serine proteases that control a plethora of developmental and homeostatic phenomena, ranging from semen liquefaction to skin desquamation and blood pressure. The diversity of roles played by kallikreins has stimulated considerable interest in these enzymes from the perspective of diagnostics and drug design. Kallikreins already have well-established credentials as targets for therapeutic intervention and there is increasing appreciation of their potential both as biomarkers and as targets for inhibitor design. Here, we explore the current status of naturally occurring kallikrein protease-inhibitor complexes and illustrate how this knowledge can interface with strategies for rational re-engineering of bioscaffolds and design of small-molecule inhibitors.
Abstract:
When we attempt to speak about the relationship between language, literacy, and the brain, we find ourselves ill equipped to deal with these conceptually and qualitatively different phenomena. Immediately we must straddle different academic traditions that treat each of these as separate “things”. Broadly speaking, the study of language firstly belongs to the domain of biology, then to anthropology, sociology, and linguistics. At its most functional, a study of literacy education is a study of a particular technology, its diffusion techniques, and the abilities and motivations of people to adopt, or adapt themselves to, this technology. The brain is most commonly studied in the field of neurology, which is also a sub-discipline of biology, biochemistry, and medicine.
Abstract:
Starting from a local problem with finding an archival clip on YouTube, this paper expands to consider the nature of archives in general. It considers the technological, communicative and philosophical characteristics of archives over three historical periods: 1) Modern ‘essence archives’ – museums and galleries organised around the concept of objectivity and realism; 2) Postmodern mediation archives – broadcast TV systems, which I argue were also ‘essence archives,’ albeit a transitional form; and 3) Network or ‘probability archives’ – YouTube and the internet, which are organised around the concept of probability. The paper goes on to argue the case for introducing quantum uncertainty and other aspects of probability theory into the humanities, in order to understand the way knowledge is collected, conserved, curated and communicated in the era of the internet. It is illustrated throughout by reference to the original technological 'affordance' – the Olduvai stone chopping tool.
Abstract:
Orlando (Sally Potter, 1992) is a significant filmic achievement: in only ninety minutes it offers a rich, layered, and challenging account of a life lived across four hundred years, across two sexes and genders, and across multiple countries and cultures. Already established as a feminist artist, Potter aligns herself with a genealogy of feminist art by adapting Virginia Woolf’s Orlando: A Biography (1928) to tell the story of Orlando: a British subject who must negotiate their “identity” while living a strangely long time and, also somewhat strangely, changing biological sex from male to female. Both novel and film interrogate norms of gender and culture. They each take up issues of sex, gender, and sexuality as socially-constructed phenomena rather than as “essential truths”, and Orlando’s attempts to tell his/her story and make sense of his/her life mirror readers’ attempts to understand and interpret Orlando’s journey within inherited artistic traditions.
Abstract:
One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition.
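As a concrete illustration of the margin definition used above, the sketch below fits a small AdaBoost ensemble and computes the empirical margin distribution of the training examples. The dataset, ensemble size and use of scikit-learn are illustrative assumptions, not part of the paper.

```python
# Sketch: empirical margin distribution of an AdaBoost voting classifier,
# in the paper's sense: (weighted votes for the true label minus the
# maximum weighted votes for any other label) / total weight.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=500, random_state=0)
clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X, y)

# For two classes the margin reduces to the normalised signed vote sum,
# positive iff the example is classified correctly.
w = clf.estimator_weights_[: len(clf.estimators_)]
votes = np.array([est.predict(X) for est in clf.estimators_])  # (T, n)
signed = np.where(votes == y, 1.0, -1.0)                       # vote correctness
margins = (w[:, None] * signed).sum(axis=0) / w.sum()          # in [-1, 1]

print("min/median margin:", margins.min(), np.median(margins))
```

Plotting the cumulative distribution of these margins as the ensemble grows is how the paper's central observation (margins keep increasing after the training error reaches zero) is typically visualised.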
Abstract:
We study Krylov subspace methods for approximating the matrix-function vector product φ(tA)b where φ(z) = [exp(z) - 1]/z. This product arises in the numerical integration of large stiff systems of differential equations by the Exponential Euler Method, where A is the Jacobian matrix of the system. Recently, this method has found application in the simulation of transport phenomena in porous media within mathematical models of wood drying and groundwater flow. We develop an a posteriori upper bound on the Krylov subspace approximation error and provide a new interpretation of a previously published error estimate. This leads to an alternative Krylov approximation to φ(tA)b, the so-called Harmonic Ritz approximant, which we find does not exhibit oscillatory behaviour of the residual error.
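The following sketch illustrates the basic Krylov approximation to φ(tA)b discussed above: an m-step Arnoldi decomposition followed by evaluation of φ on the small Hessenberg matrix via the standard augmented-matrix identity. It is an illustrative implementation under assumed problem sizes, not the Harmonic Ritz variant proposed in the paper.

```python
# Krylov approximation of phi(tA) b with phi(z) = (exp(z) - 1)/z,
# as used in the Exponential Euler Method for stiff ODE systems.
import numpy as np
from scipy.linalg import expm

def arnoldi(A, b, m):
    """m-step Arnoldi: A V[:, :m] ~ V[:, :m+1] H, with V orthonormal."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def phi_krylov(A, b, t, m=30):
    """Approximate phi(tA) b from the Krylov subspace K_m(A, b)."""
    beta = np.linalg.norm(b)
    V, H = arnoldi(A, b, m)
    # expm([[tH, e1], [0, 0]]) carries phi(tH) e1 in its last column
    M = np.zeros((m + 1, m + 1))
    M[:m, :m] = t * H[:m, :m]
    M[0, m] = 1.0
    phi_e1 = expm(M)[:m, m]
    return beta * V[:, :m] @ phi_e1

# Toy check against a dense evaluation on a small stiff-like matrix
rng = np.random.default_rng(1)
n = 200
A = -np.diag(np.linspace(1.0, 50.0, n)) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
t = 0.1
exact = np.linalg.solve(t * A, expm(t * A) @ b - b)  # phi(tA) b directly
print(np.linalg.norm(phi_krylov(A, b, t) - exact) / np.linalg.norm(exact))
```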