68 results for top specification
Abstract:
‘Top Ten Box Office Blockbusters in Dollars’ is an ongoing series of works that represents the production budgets and worldwide gross revenues of the ten highest-grossing films of all time. By displaying this data on top of the full running time of each blockbuster, the viewer’s attention is drawn back and forth between the amassing dollar figures and the original film’s highly polished presentation. In doing so, the work aims to provide a fresh opportunity to enjoy these immensely popular films with a new sense of their value. The exhibition was selected for the Artistic Program at MetroArts, Brisbane, in 2010.
Abstract:
Natural convection in a triangular enclosure subject to non-uniform cooling at the inclined surfaces and uniform heating at the base is investigated numerically. Numerical simulations of the unsteady flows over a range of Rayleigh numbers and aspect ratios are carried out using the Finite Volume Method. Since the upper surface is cooled and the bottom surface is heated, the air flow in the enclosure is potentially unstable to the Rayleigh-Bénard instability. It is revealed that the transient flow development in the enclosure can be classified into three distinct stages: an early stage, a transitional stage and a steady stage. It is also found that the flow inside the enclosure depends strongly on the governing parameters: the Rayleigh number and the aspect ratio. The asymmetric behaviour of the flow about the geometric centre line is discussed in detail. The heat transfer through the roof and the ceiling, in the form of the Nusselt number, is also reported in this study.
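Since the abstract identifies the Rayleigh number and the aspect ratio as the governing parameters, a minimal sketch of how these dimensionless groups are computed may be helpful. The air properties, temperature difference and aspect-ratio convention below are illustrative assumptions, not values taken from the study.

```python
# Minimal sketch: Rayleigh number for a bottom-heated, top-cooled air layer.
# Standard definition Ra = g * beta * dT * H**3 / (nu * alpha); the property
# values are illustrative for air near 300 K, not taken from the paper.

g = 9.81          # gravitational acceleration (m/s^2)
beta = 1 / 300.0  # thermal expansion coefficient of air (1/K)
nu = 1.6e-5       # kinematic viscosity of air (m^2/s)
alpha = 2.2e-5    # thermal diffusivity of air (m^2/s)

def rayleigh(delta_T, height):
    """Rayleigh number for a temperature difference delta_T (K) over height (m)."""
    return g * beta * delta_T * height**3 / (nu * alpha)

def aspect_ratio(height, half_base):
    """Assumed convention: enclosure height over half the base width."""
    return height / half_base

if __name__ == "__main__":
    # Example: 10 K difference over a 0.1 m tall enclosure with a 1 m base.
    print(f"Ra = {rayleigh(10.0, 0.1):.3e}, A = {aspect_ratio(0.1, 0.5):.2f}")
```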
Abstract:
This article explores an important temporal aspect of the design of strategic alliances by focusing on the issue of time bounds specification. Time bounds specification refers to a choice made by prospective alliance partners at the time of alliance formation to either pre-specify the duration of an alliance to a specific time window, or to keep the alliance open-ended (Reuer & Ariño, 2007). For instance, Das (2006) mentions the example of the alliance between Telemundo Network and Mexican Argos Comunicacion (MAC). Announced in October 2000, this alliance entailed a joint production of 1200 hours of comedy, news, drama, reality and novella programs (Das, 2006). Conditioned on the projected date of completing the 1200 hours of programs, Telemundo Network and MAC pre-specified the time bounds of the alliance ex ante. Such time-bound alliances are said to be particularly prevalent in project-based industries, such as movie production, construction, telecommunications and pharmaceuticals (Schwab & Miner, 2008). In many other instances, however, firms may choose to keep their alliances open-ended, without specifying a time bound at the time of alliance formation. The choice between designing open-ended alliances that are “built to last” versus time-bound alliances that are “meant to end” is important. Seminal works like Axelrod (1984), Heide & Miner (1992), and Parkhe (1993) demonstrated that the choice to place temporal bounds on a collaborative venture has important implications. More specifically, collaborations that have explicit, short-term time bounds (i.e. what is termed a shorter “shadow of the future”) are more likely to experience opportunism (Axelrod, 1984), are more likely to focus on the immediate present (Bakker, Boros, Kenis & Oerlemans, 2012), and are less likely to develop trust (Parkhe, 1993) than alliances for which time bounds are kept indeterminate. These factors, in turn, have been shown to have important implications for the performance of alliances (e.g. Kale, Singh & Perlmutter, 2000). Thus, there seems to be a strong incentive for organizations to form open-ended strategic alliances. And yet Reuer & Ariño (2007), one of the few empirical studies that details the prevalence of time-bound and open-ended strategic alliances, found that about half (47%) of the alliances in their sample were time-bound, while the other half were open-ended. What conditions, then, determine this choice?
Abstract:
Educators are faced with many challenging questions in designing an effective curriculum. What prerequisite knowledge do students have before commencing a new subject? At what level of mastery? What is the spread of capabilities between bare-passing students and the top-performing group? How does the intended learning specification compare to student performance at the end of a subject? In this paper we present a conceptual model that helps in answering some of these questions. It has the following main capabilities: capturing the learning specification in terms of syllabus topics and outcomes; capturing mastery levels to model progression; capturing the minimal vs. aspirational learning design; capturing confidence and reliability metrics for each of these mappings; and finally, comparing and reflecting on the learning specification against actual student performance. We present a web-based implementation of the model, and validate it by mapping the final exams from four programming subjects against the ACM/IEEE CS2013 topics and outcomes, using Bloom's Taxonomy as the mastery scale. We then import the itemised exam grades from 632 students across the four subjects and compare the demonstrated student performance against the expected learning for each. Key contributions of this work are the validated conceptual model for capturing and comparing expected learning vs. demonstrated performance, and a web-based implementation of this model, which is made freely available online as a community resource.
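As a rough illustration of the mapping idea described above (exam items tagged with a syllabus topic and a Bloom's-taxonomy mastery level, then compared against the intended learning specification), here is a hypothetical sketch. All names, the weighting rule and the aggregation below are assumptions for illustration, not the paper's actual model.

```python
# Hypothetical sketch: compare demonstrated performance per topic against
# the expected (intended) Bloom's-taxonomy level. The scaling of an item's
# Bloom level by the fraction of marks achieved is an illustrative choice.

from dataclasses import dataclass
from collections import defaultdict

BLOOM = ["Remember", "Understand", "Apply", "Analyse", "Evaluate", "Create"]

@dataclass
class ExamItem:
    topic: str        # e.g. an ACM/IEEE CS2013 topic name
    bloom_level: int  # index into BLOOM
    max_marks: float

def demonstrated_levels(items, grades):
    """Per-topic average of each item's Bloom level scaled by marks achieved."""
    by_topic = defaultdict(list)
    for item, mark in zip(items, grades):
        by_topic[item.topic].append(item.bloom_level * (mark / item.max_marks))
    return {t: sum(v) / len(v) for t, v in by_topic.items()}

expected = {"Algorithms": 3}  # intended level: Analyse (hypothetical)
items = [ExamItem("Algorithms", 3, 10.0), ExamItem("Algorithms", 2, 5.0)]
shown = demonstrated_levels(items, [7.0, 5.0])
for topic, level in shown.items():
    print(f"{topic}: demonstrated {level:.2f} vs expected "
          f"{expected[topic]} ({BLOOM[expected[topic]]})")
```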
Abstract:
Polymerase chain reaction (PCR) was developed for the detection of Banana bunchy top virus (BBTV) in at most 210 min and as little as 90 min using Pc-1 and Pc-2, respectively. PCR detection of BBTV in crude sap indicated that freezing banana tissue in liquid nitrogen (LN2) before extraction was more effective than using sand as the extraction technique. BBTV was also detected by the PCR assay in 69 healthy and diseased plants using Na-PO4 buffer containing 1% SDS. PCR detection of BBTV in nucleic acid extracts prepared with seven different extraction buffers was studied to adapt PCR for routine detection in the field. Results proved that BBTV was detected with higher sensitivity in nucleic acid extracts than in infectious sap. The results also suggested a common aetiology for BBTV, based on the PCR reactions of BBTV in nucleic acid extracts from Australia, Burundi, Egypt, France, Gabon, the Philippines and Taiwan. Results also showed a positive relation between the Egyptian BBTV isolate and the abaca bunchy top isolate from the Philippines, but no relation was found with the Cucumber mosaic cucumovirus (CMV) isolates from Egypt and the Philippines or with Banana bract mosaic virus (BBMV).
Abstract:
Enterprise Systems (ES) can be understood as the de facto standard for holistic operational and managerial support within an organization. Most commonly, ES are offered as commercial off-the-shelf packages requiring customization in the user organization. This customization is a complex and resource-intensive task, which often prevents small and midsize enterprises (SMEs) from undertaking configuration projects. Especially in the SME market, independent software vendors provide pre-configured ES for a small customer base. The problem of ES configuration is thereby shifted from the customer to the vendor, but remains critical. We argue that the as yet unexplored link between process configuration and business document configuration must be examined more closely, as both types of configuration are closely tied to one another.
Abstract:
This book provides a general framework for specifying, estimating, and testing time series econometric models. Special emphasis is given to estimation by maximum likelihood, but other methods are also discussed, including quasi-maximum likelihood estimation, generalized method of moments estimation, nonparametric estimation, and estimation by simulation. An important advantage of adopting the principle of maximum likelihood as the unifying framework for the book is that many of the estimators and test statistics proposed in econometrics can be derived within a likelihood framework, thereby providing a coherent vehicle for understanding their properties and interrelationships. In contrast to many existing econometric textbooks, which deal mainly with the theoretical properties of estimators and test statistics through a theorem-proof presentation, this book squarely addresses implementation to provide direct conduits between the theory and applied work.
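To make the unifying maximum-likelihood idea concrete, the following is an illustrative sketch rather than an example from the book: conditional maximum likelihood estimation of a Gaussian AR(1) model by numerical optimisation of the log-likelihood. The model, parameterisation and simulated data are assumptions for illustration.

```python
# Illustrative sketch (not from the book): conditional ML estimation of a
# Gaussian AR(1) model y_t = phi * y_{t-1} + e_t, e_t ~ N(0, sigma^2),
# by numerically maximising the log-likelihood.

import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, y):
    """Negative conditional Gaussian log-likelihood of an AR(1) model."""
    phi, log_sigma = params
    sigma2 = np.exp(2 * log_sigma)   # parameterise sigma > 0 via its log
    resid = y[1:] - phi * y[:-1]     # one-step-ahead prediction errors
    n = resid.size
    return 0.5 * (n * np.log(2 * np.pi * sigma2) + np.sum(resid**2) / sigma2)

# Simulate data with known parameters phi = 0.7, sigma = 1.
rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.7 * y[t - 1] + rng.standard_normal()

fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(y,), method="BFGS")
phi_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"phi_hat = {phi_hat:.3f}, sigma_hat = {sigma_hat:.3f}")
```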
Abstract:
In early April 1998, the Centre for Disease Control in Darwin was notified of a possible case of dengue which appeared to have been acquired in the Northern Territory. Because dengue is not endemic to the Northern Territory, locally acquired infection has significant public health implications, particularly for vector identification and control to limit the spread of infection. Dengue IgM serology was positive on two occasions, but the illness was eventually presumptively identified as Kokobera infection. This case illustrates the complexity of interpreting flavivirus serology. Determining the cause of infection requires consideration of the clinical illness, the incubation period, the laboratory results and vector presence. Waiting for confirmation of results, before the institution of the public health measures necessary for a true case of dengue, was ultimately justified in this case. This is a valid approach in the Northern Territory, but may not be applicable to areas of Australia with established vectors for dengue. Commun Dis Intell 1998;22:105-107.
Abstract:
In early April 1998 the Centre for Disease Control (CDC) in Darwin was notified of a case with positive dengue serology. The illness appeared to have been acquired in the Northern Territory (NT). Because dengue is not endemic to the NT, locally acquired infection has significant public health implications, particularly for vector identification and control to limit the spread of infection. Dengue IgM serology was positive on two occasions but the illness was eventually presumptively identified as Kokobera infection. This case illustrates some important points about serology. The interpretation of flavivirus serology is complex and can be misleading, despite recent improvements. The best method of determining the cause of infection is still attempting to reconcile clinical illness details with incubation times and vector presence, as well as laboratory results. This approach ultimately justified the initial period of waiting for confirmatory results in this case, before the institution of public health measures necessary for a true case of dengue.
Abstract:
Early determination of immune status is essential for the prevention and/or amelioration of disease following exposure to chickenpox. This is of particular significance for pregnant women because of the additional risks to the foetus or newborn [1]. To determine the usefulness of a self-reported history of chickenpox in adult women in the Top End, we compared it with serological evidence of immunity.
Abstract:
In this paper we present a novel place recognition algorithm inspired by recent discoveries in human visual neuroscience. The algorithm combines intolerant but fast low-resolution whole-image matching with highly tolerant sub-image patch-matching processes. The approach does not require prior training and works on single images (although we use a cohort normalization score to exploit temporal frame information), alleviating the need for either a velocity signal or an image sequence and differentiating it from current state-of-the-art methods. We demonstrate the algorithm on the challenging Alderley sunny day – rainy night dataset, which had previously been solved only by integrating over image sequences 320 frames long. The system achieves 21.24% recall at 100% precision, matching drastically different day and night-time images of places while successfully rejecting match hypotheses between highly aliased images of different places. The results provide a new benchmark for single-image, condition-invariant place recognition.
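For readers unfamiliar with the fast, intolerant component, the sketch below illustrates generic low-resolution whole-image matching by sum of absolute differences. It is not the authors' algorithm; the tolerant patch-matching stage and the cohort normalisation are not reproduced, and the resolution and normalisation choices are assumptions.

```python
# Rough sketch of low-resolution whole-image matching only: downsample each
# image to a tiny resolution, normalise, and compare by sum of absolute
# differences. The 32x24 resolution is an illustrative assumption.

import numpy as np

def preprocess(img, size=(32, 24)):
    """Downsample a grayscale image and zero-mean/unit-variance normalise it."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, size[1]).astype(int)  # subsampled row indices
    xs = np.linspace(0, w - 1, size[0]).astype(int)  # subsampled column indices
    small = img[np.ix_(ys, xs)].astype(float)
    return (small - small.mean()) / (small.std() + 1e-8)

def match_place(query, database):
    """Return the index and score of the best-matching reference image."""
    q = preprocess(query)
    scores = [np.abs(q - preprocess(ref)).sum() for ref in database]
    best = int(np.argmin(scores))
    return best, scores[best]

# Usage with random stand-in images; real inputs would be camera frames.
rng = np.random.default_rng(0)
database = [rng.random((240, 320)) for _ in range(10)]
print(match_place(database[3], database))  # index 3, score ~0
```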
Abstract:
This paper describes a new method of indexing and searching large binary signature collections to efficiently find similar signatures, addressing the scalability problem in signature search. Signatures offer efficient computation with an acceptable measure of similarity in numerous applications. However, performing a complete search with a given search argument (a signature) requires a Hamming distance calculation against every signature in the collection. This quickly becomes excessive when dealing with large collections, presenting issues of scalability that limit their applicability. Our method efficiently finds similar signatures in very large collections, trading memory use and precision for greatly improved search speed. Experimental results demonstrate that our approach is capable of finding a set of nearest signatures to a given search argument with a high degree of speed and fidelity.
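The scalability problem the paper addresses can be made concrete with a baseline sketch: an exact search computes the Hamming distance between the search argument and every signature in the collection. The code below illustrates that exhaustive cost with a vectorised bit count; the paper's indexing scheme itself is not reproduced, and the signature size is an assumption.

```python
# Baseline sketch of exhaustive Hamming-distance search over packed binary
# signatures. Every signature is touched once per query, which is the cost
# that indexing methods aim to avoid.

import numpy as np

def hamming_search(query, signatures, k=5):
    """Exact k-nearest search over packed binary signatures.

    query      -- 1-D uint64 array, one packed signature
    signatures -- 2-D uint64 array, one packed signature per row
    """
    xor = np.bitwise_xor(signatures, query)  # differing bits, word by word
    # Count set bits: view each row's uint64 words as bytes, unpack to bits.
    dists = np.unpackbits(xor.view(np.uint8), axis=1).sum(axis=1)
    nearest = np.argsort(dists)[:k]
    return nearest, dists[nearest]

# Usage: 100k random 1024-bit signatures (16 uint64 words each).
rng = np.random.default_rng(1)
db = rng.integers(0, 2**63, size=(100_000, 16), dtype=np.uint64)
idx, d = hamming_search(db[42], db)
print(idx, d)  # index 42 should come back first with distance 0
```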
Abstract:
The introduction of safety technologies into complex socio-technical systems requires an integrated and holistic approach to human factors (HF) and engineering, considering the effects of failures not only within system boundaries but also at the interfaces with other systems and humans. Level crossing warning devices are examples of such systems, where technically safe states within the system boundary can influence road user performance, giving rise to other hazards that degrade the safety of the system. Chris will discuss the challenges that have been encountered to date in developing a safety argument in support of low-cost level crossing warning devices. The design and failure modes of level crossing warning devices are known to have a significant influence on road user performance; however, quantifying this effect is one of the ongoing challenges in determining appropriate reliability and availability targets for low-cost level crossing warning devices.